00:00:00.001 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 202
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3704
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.016 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.017 The recommended git tool is: git
00:00:00.017 using credential 00000000-0000-0000-0000-000000000002
00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.032 Fetching changes from the remote Git repository
00:00:00.035 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.051 Using shallow fetch with depth 1
00:00:00.051 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.051 > git --version # timeout=10
00:00:00.080 > git --version # 'git version 2.39.2'
00:00:00.080 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.117 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.117 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.250 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.262 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.275 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.275 > git config core.sparsecheckout # timeout=10
00:00:02.287 > git read-tree -mu HEAD # timeout=10
00:00:02.304 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.332 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.332 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.521 [Pipeline] Start of Pipeline
00:00:02.533 [Pipeline] library
00:00:02.534 Loading library shm_lib@master
00:00:02.535 Library shm_lib@master is cached. Copying from home.
00:00:02.548 [Pipeline] node
00:00:02.560 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.561 [Pipeline] {
00:00:02.569 [Pipeline] catchError
00:00:02.570 [Pipeline] {
00:00:02.581 [Pipeline] wrap
00:00:02.588 [Pipeline] {
00:00:02.595 [Pipeline] stage
00:00:02.596 [Pipeline] { (Prologue)
00:00:02.612 [Pipeline] echo
00:00:02.613 Node: VM-host-WFP7
00:00:02.618 [Pipeline] cleanWs
00:00:02.627 [WS-CLEANUP] Deleting project workspace...
00:00:02.627 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.634 [WS-CLEANUP] done
00:00:02.805 [Pipeline] setCustomBuildProperty
00:00:02.870 [Pipeline] httpRequest
00:00:03.188 [Pipeline] echo
00:00:03.189 Sorcerer 10.211.164.20 is alive
00:00:03.199 [Pipeline] retry
00:00:03.201 [Pipeline] {
00:00:03.213 [Pipeline] httpRequest
00:00:03.218 HttpMethod: GET
00:00:03.219 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.219 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.220 Response Code: HTTP/1.1 200 OK
00:00:03.221 Success: Status code 200 is in the accepted range: 200,404
00:00:03.221 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.367 [Pipeline] }
00:00:03.385 [Pipeline] // retry
00:00:03.391 [Pipeline] sh
00:00:03.689 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.706 [Pipeline] httpRequest
00:00:04.265 [Pipeline] echo
00:00:04.267 Sorcerer 10.211.164.20 is alive
00:00:04.275 [Pipeline] retry
00:00:04.277 [Pipeline] {
00:00:04.289 [Pipeline] httpRequest
00:00:04.293 HttpMethod: GET
00:00:04.293 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.294 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:04.294 Response Code: HTTP/1.1 200 OK
00:00:04.295 Success: Status code 200 is in the accepted range: 200,404
00:00:04.295 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:15.699 [Pipeline] }
00:00:15.749 [Pipeline] // retry
00:00:15.754 [Pipeline] sh
00:00:16.032 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz
00:00:18.583 [Pipeline] sh
00:00:18.868 + git -C spdk log --oneline -n5
00:00:18.868 b18e1bd62 version: v24.09.1-pre
00:00:18.868 19524ad45 version: v24.09
00:00:18.868 9756b40a3 dpdk: update submodule to include alarm_cancel fix
00:00:18.868 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810
00:00:18.868 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys
00:00:18.891 [Pipeline] withCredentials
00:00:18.904 > git --version # timeout=10
00:00:18.920 > git --version # 'git version 2.39.2'
00:00:18.939 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:18.941 [Pipeline] {
00:00:18.950 [Pipeline] retry
00:00:18.952 [Pipeline] {
00:00:18.968 [Pipeline] sh
00:00:19.254 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:19.529 [Pipeline] }
00:00:19.546 [Pipeline] // retry
00:00:19.552 [Pipeline] }
00:00:19.568 [Pipeline] // withCredentials
00:00:19.579 [Pipeline] httpRequest
00:00:19.994 [Pipeline] echo
00:00:19.996 Sorcerer 10.211.164.20 is alive
00:00:20.002 [Pipeline] retry
00:00:20.004 [Pipeline] {
00:00:20.016 [Pipeline] httpRequest
00:00:20.020 HttpMethod: GET
00:00:20.021 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.022 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:20.039 Response Code: HTTP/1.1 200 OK
00:00:20.039 Success: Status code 200 is in the accepted range: 200,404
00:00:20.040 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:51.899 [Pipeline] }
00:00:51.917 [Pipeline] // retry
00:00:51.924 [Pipeline] sh
00:00:52.211 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:53.610 [Pipeline] sh
00:00:53.897 + git -C dpdk log --oneline -n5
00:00:53.897 caf0f5d395 version: 22.11.4
00:00:53.897 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:53.897 dc9c799c7d vhost: fix missing spinlock unlock
00:00:53.897 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:53.897 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:53.918 [Pipeline] writeFile
00:00:53.938 [Pipeline] sh
00:00:54.243 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:54.259 [Pipeline] sh
00:00:54.552 + cat autorun-spdk.conf
00:00:54.552 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.552 SPDK_RUN_ASAN=1
00:00:54.552 SPDK_RUN_UBSAN=1
00:00:54.552 SPDK_TEST_RAID=1
00:00:54.552 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.552 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:54.552 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:54.560 RUN_NIGHTLY=1
00:00:54.562 [Pipeline] }
00:00:54.581 [Pipeline] // stage
00:00:54.599 [Pipeline] stage
00:00:54.602 [Pipeline] { (Run VM)
00:00:54.617 [Pipeline] sh
00:00:54.903 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:54.903 + echo 'Start stage prepare_nvme.sh'
00:00:54.903 Start stage prepare_nvme.sh
00:00:54.903 + [[ -n 2 ]]
00:00:54.903 + disk_prefix=ex2
00:00:54.903 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:54.903 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:54.903 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:54.903 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:54.903 ++ SPDK_RUN_ASAN=1
00:00:54.903 ++ SPDK_RUN_UBSAN=1
00:00:54.903 ++ SPDK_TEST_RAID=1
00:00:54.903 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:54.903 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:54.903 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:54.903 ++ RUN_NIGHTLY=1
00:00:54.903 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:54.903 + nvme_files=()
00:00:54.903 + declare -A nvme_files
00:00:54.903 + backend_dir=/var/lib/libvirt/images/backends
00:00:54.903 + nvme_files['nvme.img']=5G
00:00:54.903 + nvme_files['nvme-cmb.img']=5G
00:00:54.903 + nvme_files['nvme-multi0.img']=4G
00:00:54.903 + nvme_files['nvme-multi1.img']=4G
00:00:54.903 + nvme_files['nvme-multi2.img']=4G
00:00:54.903 + nvme_files['nvme-openstack.img']=8G
00:00:54.903 + nvme_files['nvme-zns.img']=5G
00:00:54.903 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:54.903 + (( SPDK_TEST_FTL == 1 ))
00:00:54.903 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:54.903 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:54.903 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:54.903 + for nvme in "${!nvme_files[@]}"
00:00:54.903 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:55.163 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:55.163 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:55.163 + echo 'End stage prepare_nvme.sh'
00:00:55.163 End stage prepare_nvme.sh
00:00:55.177 [Pipeline] sh
00:00:55.463 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:55.463 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:00:55.463
00:00:55.463 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:55.463 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:55.463 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:55.463 HELP=0
00:00:55.463 DRY_RUN=0
00:00:55.463 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:00:55.463 NVME_DISKS_TYPE=nvme,nvme,
00:00:55.463 NVME_AUTO_CREATE=0
00:00:55.463 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:00:55.463 NVME_CMB=,,
00:00:55.463 NVME_PMR=,,
00:00:55.463 NVME_ZNS=,,
00:00:55.463 NVME_MS=,,
00:00:55.463 NVME_FDP=,,
00:00:55.463 SPDK_VAGRANT_DISTRO=fedora39
00:00:55.463 SPDK_VAGRANT_VMCPU=10
00:00:55.463 SPDK_VAGRANT_VMRAM=12288
00:00:55.463 SPDK_VAGRANT_PROVIDER=libvirt
00:00:55.463 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:55.463 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:55.463 SPDK_OPENSTACK_NETWORK=0
00:00:55.463 VAGRANT_PACKAGE_BOX=0
00:00:55.463 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:55.463 FORCE_DISTRO=true
00:00:55.463 VAGRANT_BOX_VERSION=
00:00:55.463 EXTRA_VAGRANTFILES=
00:00:55.463 NIC_MODEL=virtio
00:00:55.463
00:00:55.463 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:55.463 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:57.375 Bringing machine 'default' up with 'libvirt' provider...
00:00:57.635 ==> default: Creating image (snapshot of base box volume).
00:00:57.895 ==> default: Creating domain with the following settings...
00:00:57.895 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733535903_1f5e131854c5808ff2bf
00:00:57.895 ==> default: -- Domain type: kvm
00:00:57.895 ==> default: -- Cpus: 10
00:00:57.895 ==> default: -- Feature: acpi
00:00:57.895 ==> default: -- Feature: apic
00:00:57.895 ==> default: -- Feature: pae
00:00:57.895 ==> default: -- Memory: 12288M
00:00:57.895 ==> default: -- Memory Backing: hugepages:
00:00:57.895 ==> default: -- Management MAC:
00:00:57.895 ==> default: -- Loader:
00:00:57.895 ==> default: -- Nvram:
00:00:57.895 ==> default: -- Base box: spdk/fedora39
00:00:57.895 ==> default: -- Storage pool: default
00:00:57.895 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733535903_1f5e131854c5808ff2bf.img (20G)
00:00:57.895 ==> default: -- Volume Cache: default
00:00:57.895 ==> default: -- Kernel:
00:00:57.895 ==> default: -- Initrd:
00:00:57.895 ==> default: -- Graphics Type: vnc
00:00:57.895 ==> default: -- Graphics Port: -1
00:00:57.895 ==> default: -- Graphics IP: 127.0.0.1
00:00:57.895 ==> default: -- Graphics Password: Not defined
00:00:57.895 ==> default: -- Video Type: cirrus
00:00:57.895 ==> default: -- Video VRAM: 9216
00:00:57.895 ==> default: -- Sound Type:
00:00:57.895 ==> default: -- Keymap: en-us
00:00:57.895 ==> default: -- TPM Path:
00:00:57.895 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:57.895 ==> default: -- Command line args:
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:57.895 ==> default: -> value=-drive,
00:00:57.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:57.895 ==> default: -> value=-drive,
00:00:57.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.895 ==> default: -> value=-drive,
00:00:57.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:57.895 ==> default: -> value=-drive,
00:00:57.895 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:57.895 ==> default: -> value=-device,
00:00:57.895 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:58.155 ==> default: Creating shared folders metadata...
00:00:58.155 ==> default: Starting domain.
00:00:59.539 ==> default: Waiting for domain to get an IP address...
00:01:17.650 ==> default: Waiting for SSH to become available...
00:01:17.650 ==> default: Configuring and enabling network interfaces...
00:01:22.937 default: SSH address: 192.168.121.82:22
00:01:22.937 default: SSH username: vagrant
00:01:22.937 default: SSH auth method: private key
00:01:25.481 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:33.694 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:38.977 ==> default: Mounting SSHFS shared folder...
00:01:41.530 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:41.531 ==> default: Checking Mount..
00:01:42.910 ==> default: Folder Successfully Mounted!
00:01:42.910 ==> default: Running provisioner: file...
00:01:44.307 default: ~/.gitconfig => .gitconfig
00:01:44.875
00:01:44.875 SUCCESS!
00:01:44.875
00:01:44.875 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:44.875 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:44.875 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:44.875
00:01:44.886 [Pipeline] }
00:01:44.906 [Pipeline] // stage
00:01:44.917 [Pipeline] dir
00:01:44.918 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:44.919 [Pipeline] {
00:01:44.932 [Pipeline] catchError
00:01:44.934 [Pipeline] {
00:01:44.950 [Pipeline] sh
00:01:45.242 + vagrant ssh-config --host vagrant
00:01:45.242 + sed -ne /^Host/,$p
00:01:45.242 + tee ssh_conf
00:01:47.782 Host vagrant
00:01:47.782 HostName 192.168.121.82
00:01:47.782 User vagrant
00:01:47.782 Port 22
00:01:47.782 UserKnownHostsFile /dev/null
00:01:47.782 StrictHostKeyChecking no
00:01:47.782 PasswordAuthentication no
00:01:47.782 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:47.782 IdentitiesOnly yes
00:01:47.782 LogLevel FATAL
00:01:47.782 ForwardAgent yes
00:01:47.782 ForwardX11 yes
00:01:47.782
00:01:47.797 [Pipeline] withEnv
00:01:47.799 [Pipeline] {
00:01:47.813 [Pipeline] sh
00:01:48.095 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:48.096 source /etc/os-release
00:01:48.096 [[ -e /image.version ]] && img=$(< /image.version)
00:01:48.096 # Minimal, systemd-like check.
00:01:48.096 if [[ -e /.dockerenv ]]; then
00:01:48.096 # Clear garbage from the node's name:
00:01:48.096 # agt-er_autotest_547-896 -> autotest_547-896
00:01:48.096 # $HOSTNAME is the actual container id
00:01:48.096 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:48.096 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:48.096 # We can assume this is a mount from a host where container is running,
00:01:48.096 # so fetch its hostname to easily identify the target swarm worker.
00:01:48.096 container="$(< /etc/hostname) ($agent)"
00:01:48.096 else
00:01:48.096 # Fallback
00:01:48.096 container=$agent
00:01:48.096 fi
00:01:48.096 fi
00:01:48.096 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:48.096
00:01:48.370 [Pipeline] }
00:01:48.403 [Pipeline] // withEnv
00:01:48.436 [Pipeline] setCustomBuildProperty
00:01:48.461 [Pipeline] stage
00:01:48.464 [Pipeline] { (Tests)
00:01:48.481 [Pipeline] sh
00:01:48.763 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:49.036 [Pipeline] sh
00:01:49.320 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:49.595 [Pipeline] timeout
00:01:49.595 Timeout set to expire in 1 hr 30 min
00:01:49.597 [Pipeline] {
00:01:49.612 [Pipeline] sh
00:01:49.894 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:50.463 HEAD is now at b18e1bd62 version: v24.09.1-pre
00:01:50.476 [Pipeline] sh
00:01:50.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:51.037 [Pipeline] sh
00:01:51.319 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:51.594 [Pipeline] sh
00:01:51.876 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:52.136 ++ readlink -f spdk_repo
00:01:52.136 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:52.136 + [[ -n /home/vagrant/spdk_repo ]]
00:01:52.136 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:52.136 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:52.136 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:52.136 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:52.136 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:52.136 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:52.136 + cd /home/vagrant/spdk_repo
00:01:52.136 + source /etc/os-release
00:01:52.136 ++ NAME='Fedora Linux'
00:01:52.136 ++ VERSION='39 (Cloud Edition)'
00:01:52.136 ++ ID=fedora
00:01:52.136 ++ VERSION_ID=39
00:01:52.136 ++ VERSION_CODENAME=
00:01:52.136 ++ PLATFORM_ID=platform:f39
00:01:52.136 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:52.136 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:52.136 ++ LOGO=fedora-logo-icon
00:01:52.136 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:52.136 ++ HOME_URL=https://fedoraproject.org/
00:01:52.136 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:52.136 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:52.136 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:52.136 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:52.136 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:52.136 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:52.136 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:52.136 ++ SUPPORT_END=2024-11-12
00:01:52.136 ++ VARIANT='Cloud Edition'
00:01:52.136 ++ VARIANT_ID=cloud
00:01:52.136 + uname -a
00:01:52.136 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:52.136 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:52.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:52.705 Hugepages
00:01:52.705 node hugesize free / total
00:01:52.705 node0 1048576kB 0 / 0
00:01:52.705 node0 2048kB 0 / 0
00:01:52.705
00:01:52.705 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:52.705 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:52.705 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:52.705 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3
00:01:52.705 + rm -f /tmp/spdk-ld-path
00:01:52.705 + source autorun-spdk.conf
00:01:52.705 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.705 ++ SPDK_RUN_ASAN=1
00:01:52.705 ++ SPDK_RUN_UBSAN=1
00:01:52.705 ++ SPDK_TEST_RAID=1
00:01:52.705 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:52.705 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:52.705 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.705 ++ RUN_NIGHTLY=1
00:01:52.705 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:52.705 + [[ -n '' ]]
00:01:52.705 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:52.705 + for M in /var/spdk/build-*-manifest.txt
00:01:52.705 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:52.705 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.705 + for M in /var/spdk/build-*-manifest.txt
00:01:52.705 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:52.705 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.965 + for M in /var/spdk/build-*-manifest.txt
00:01:52.965 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:52.965 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:52.965 ++ uname
00:01:52.965 + [[ Linux == \L\i\n\u\x ]]
00:01:52.965 + sudo dmesg -T
00:01:52.965 + sudo dmesg --clear
00:01:52.965 + dmesg_pid=6163
00:01:52.965 + [[ Fedora Linux == FreeBSD ]]
00:01:52.965 + sudo dmesg -Tw
00:01:52.965 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.965 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:52.965 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:52.965 + [[ -x /usr/src/fio-static/fio ]]
00:01:52.965 + export FIO_BIN=/usr/src/fio-static/fio
00:01:52.965 + FIO_BIN=/usr/src/fio-static/fio
00:01:52.965 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:52.965 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:52.965 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:52.965 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.965 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:52.965 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:52.965 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.965 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:52.965 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:52.965 Test configuration:
00:01:52.965 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:52.965 SPDK_RUN_ASAN=1
00:01:52.965 SPDK_RUN_UBSAN=1
00:01:52.965 SPDK_TEST_RAID=1
00:01:52.965 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:52.965 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:52.965 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:52.965 RUN_NIGHTLY=1
01:45:58 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:01:52.965 01:45:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:52.965 01:45:58 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:52.965 01:45:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:52.965 01:45:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:52.965 01:45:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:52.965 01:45:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.965 01:45:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.965 01:45:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.965 01:45:58 -- paths/export.sh@5 -- $ export PATH
00:01:52.965 01:45:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:52.965 01:45:58 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:52.965 01:45:58 -- common/autobuild_common.sh@479 -- $ date +%s
00:01:53.225 01:45:58 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733535958.XXXXXX
00:01:53.225 01:45:58 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733535958.8Y94Ir
00:01:53.225 01:45:58 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:01:53.225 01:45:58 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:01:53.225 01:45:58 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:53.225 01:45:58 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:53.225 01:45:58 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:53.225 01:45:58 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:53.225 01:45:58 -- common/autobuild_common.sh@495 -- $ get_config_params
00:01:53.225 01:45:58 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:01:53.225 01:45:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:53.225 01:45:58 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:53.225 01:45:58 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:01:53.225 01:45:58 -- pm/common@17 -- $ local monitor
00:01:53.225 01:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.225 01:45:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:53.225 01:45:58 -- pm/common@25 -- $ sleep 1
00:01:53.225 01:45:58 -- pm/common@21 -- $ date +%s
00:01:53.225 01:45:58 -- pm/common@21 -- $ date +%s
00:01:53.225 01:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733535958
00:01:53.225 01:45:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733535958
00:01:53.225 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733535958_collect-cpu-load.pm.log
00:01:53.225 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733535958_collect-vmstat.pm.log
00:01:54.162 01:45:59 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:01:54.162 01:45:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:54.162 01:45:59 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:54.162 01:45:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:54.162 01:45:59 -- spdk/autobuild.sh@16 -- $ date -u
00:01:54.162 Sat Dec 7 01:45:59 AM UTC 2024
00:01:54.162 01:45:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:54.162 v24.09-1-gb18e1bd62
00:01:54.162 01:45:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:54.162 01:45:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:54.162 01:45:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.162 01:45:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.162 01:45:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.162 ************************************
00:01:54.162 START TEST asan
00:01:54.162 ************************************
00:01:54.162 using asan
00:01:54.162 01:45:59 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:01:54.162
00:01:54.162 real 0m0.000s
00:01:54.162 user 0m0.000s
00:01:54.162 sys 0m0.000s
00:01:54.162 01:45:59 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.162 01:45:59 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.162 ************************************
00:01:54.162 END TEST asan
00:01:54.162 ************************************
00:01:54.162 01:45:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:54.162 01:45:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:54.162 01:45:59 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:54.162 01:45:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.162 01:45:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.162 ************************************
00:01:54.162 START TEST ubsan
00:01:54.162 ************************************
00:01:54.162 using ubsan
00:01:54.162 01:45:59 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:54.162
00:01:54.162 real 0m0.000s
00:01:54.162 user 0m0.000s
00:01:54.162 sys 0m0.000s
00:01:54.162 01:45:59 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:54.162 01:45:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:54.162 ************************************
00:01:54.162 END TEST ubsan
00:01:54.162 ************************************
00:01:54.422 01:45:59 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:54.422 01:45:59 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:54.422 01:45:59 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:54.422 01:45:59 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']'
00:01:54.422 01:45:59 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:54.422 01:45:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:54.422 ************************************
00:01:54.422 START TEST build_native_dpdk
00:01:54.422 ************************************
00:01:54.422 01:45:59 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk
00:01:54.422 01:45:59 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:54.422 01:45:59 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:54.423 caf0f5d395 version: 22.11.4
00:01:54.423 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:54.423 dc9c799c7d vhost: fix missing spinlock unlock
00:01:54.423 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:54.423 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:54.423 01:45:59 build_native_dpdk --
common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:54.423 patching file config/rte_config.h 00:01:54.423 Hunk #1 succeeded at 60 (offset 1 line). 
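The cmp_versions trace above splits each version string on the characters `.-:` and compares the resulting numeric components left to right; 22 > 21 decides the first comparison (lt returns false), so the rte_config.h patch for pre-21.11 layouts is skipped in favor of the one applied here. A minimal standalone sketch of that comparison; the function name `ver_lt` and the zero-padding of missing components are assumptions, not the exact code of scripts/common.sh:

```shell
# Component-wise "less than" over dotted version strings, numeric fields only,
# mirroring the cmp_versions trace in the log above.
ver_lt() {
  local IFS='.-:'          # split versions the same way the script's read -ra does
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i max=${#v1[@]}
  if (( ${#v2[@]} > max )); then max=${#v2[@]}; fi
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components treated as 0 (assumption)
    if (( a < b )); then return 0; fi   # strictly less at this component
    if (( a > b )); then return 1; fi   # strictly greater, so not less-than
  done
  return 1                              # all components equal, so not less-than
}

ver_lt 22.11.4 21.11.0 && echo lt || echo not-lt   # 22 > 21, so not-lt
ver_lt 22.11.4 24.07.0 && echo lt || echo not-lt   # 22 < 24, so lt
```

The same comparator is run again with the other bounds in the records that follow, which is what gates the pcapng patch and the `dpdk_kmods` choice.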
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:54.423 patching file lib/pcapng/rte_pcapng.c 00:01:54.423 Hunk #1 succeeded at 110 (offset -18 lines). 
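At the meson configure step that follows, the `-Denable_drivers=` value is the DPDK_DRIVERS array joined with commas by a `printf %s,` call, trailing comma and all (the log shows the joined string passed to meson as-is). A standalone sketch of that join, using the driver names from the log:

```shell
# Join the enabled-driver list the way the log's printf %s, invocation does:
# printf reuses its format string once per argument, so every name gets a
# trailing comma, including the last one.
DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
drivers=$(printf %s, "${DPDK_DRIVERS[@]}")
echo "$drivers"   # bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
```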
00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:54.423 01:45:59 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:54.423 01:45:59 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.013 The Meson build system 00:02:01.013 Version: 1.5.0 00:02:01.013 
Source dir: /home/vagrant/spdk_repo/dpdk 00:02:01.013 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:01.013 Build type: native build 00:02:01.013 Program cat found: YES (/usr/bin/cat) 00:02:01.013 Project name: DPDK 00:02:01.013 Project version: 22.11.4 00:02:01.013 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:01.013 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:01.013 Host machine cpu family: x86_64 00:02:01.013 Host machine cpu: x86_64 00:02:01.013 Message: ## Building in Developer Mode ## 00:02:01.013 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:01.013 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:01.013 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:01.013 Program objdump found: YES (/usr/bin/objdump) 00:02:01.013 Program python3 found: YES (/usr/bin/python3) 00:02:01.013 Program cat found: YES (/usr/bin/cat) 00:02:01.013 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:01.013 Checking for size of "void *" : 8 00:02:01.013 Checking for size of "void *" : 8 (cached) 00:02:01.013 Library m found: YES 00:02:01.013 Library numa found: YES 00:02:01.013 Has header "numaif.h" : YES 00:02:01.013 Library fdt found: NO 00:02:01.013 Library execinfo found: NO 00:02:01.013 Has header "execinfo.h" : YES 00:02:01.013 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:01.013 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:01.013 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:01.013 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:01.013 Run-time dependency openssl found: YES 3.1.1 00:02:01.013 Run-time dependency libpcap found: YES 1.10.4 00:02:01.013 Has header "pcap.h" with dependency libpcap: YES 00:02:01.013 Compiler for C supports arguments -Wcast-qual: YES 00:02:01.013 Compiler for C supports arguments -Wdeprecated: YES 00:02:01.013 Compiler for C supports arguments -Wformat: YES 00:02:01.013 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:01.013 Compiler for C supports arguments -Wformat-security: NO 00:02:01.013 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:01.013 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:01.013 Compiler for C supports arguments -Wnested-externs: YES 00:02:01.013 Compiler for C supports arguments -Wold-style-definition: YES 00:02:01.013 Compiler for C supports arguments -Wpointer-arith: YES 00:02:01.013 Compiler for C supports arguments -Wsign-compare: YES 00:02:01.013 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:01.013 Compiler for C supports arguments -Wundef: YES 00:02:01.013 Compiler for C supports arguments -Wwrite-strings: YES 00:02:01.013 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:01.013 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:01.013 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:01.013 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:01.013 Compiler for C supports arguments -mavx512f: YES 00:02:01.013 Checking if "AVX512 checking" compiles: YES 00:02:01.013 Fetching value of define "__SSE4_2__" : 1 00:02:01.013 Fetching value of define "__AES__" : 1 00:02:01.013 Fetching value of define "__AVX__" : 1 00:02:01.013 Fetching value of define "__AVX2__" : 1 00:02:01.013 Fetching value of define "__AVX512BW__" : 1 00:02:01.013 Fetching value of define "__AVX512CD__" : 1 00:02:01.013 Fetching value of define "__AVX512DQ__" : 1 00:02:01.013 Fetching value of define "__AVX512F__" : 1 00:02:01.013 Fetching value of define "__AVX512VL__" : 1 00:02:01.013 Fetching value of define "__PCLMUL__" : 1 00:02:01.013 Fetching value of define "__RDRND__" : 1 00:02:01.013 Fetching value of define "__RDSEED__" : 1 00:02:01.013 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:01.013 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:01.013 Message: lib/kvargs: Defining dependency "kvargs" 00:02:01.013 Message: lib/telemetry: Defining dependency "telemetry" 00:02:01.013 Checking for function "getentropy" : YES 00:02:01.013 Message: lib/eal: Defining dependency "eal" 00:02:01.013 Message: lib/ring: Defining dependency "ring" 00:02:01.013 Message: lib/rcu: Defining dependency "rcu" 00:02:01.013 Message: lib/mempool: Defining dependency "mempool" 00:02:01.013 Message: lib/mbuf: Defining dependency "mbuf" 00:02:01.013 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:01.013 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:01.013 Compiler for C supports arguments -mpclmul: YES 00:02:01.013 Compiler for C supports arguments -maes: YES 
00:02:01.013 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.013 Compiler for C supports arguments -mavx512bw: YES 00:02:01.013 Compiler for C supports arguments -mavx512dq: YES 00:02:01.013 Compiler for C supports arguments -mavx512vl: YES 00:02:01.013 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:01.013 Compiler for C supports arguments -mavx2: YES 00:02:01.013 Compiler for C supports arguments -mavx: YES 00:02:01.013 Message: lib/net: Defining dependency "net" 00:02:01.013 Message: lib/meter: Defining dependency "meter" 00:02:01.013 Message: lib/ethdev: Defining dependency "ethdev" 00:02:01.013 Message: lib/pci: Defining dependency "pci" 00:02:01.013 Message: lib/cmdline: Defining dependency "cmdline" 00:02:01.013 Message: lib/metrics: Defining dependency "metrics" 00:02:01.013 Message: lib/hash: Defining dependency "hash" 00:02:01.013 Message: lib/timer: Defining dependency "timer" 00:02:01.013 Fetching value of define "__AVX2__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.013 Message: lib/acl: Defining dependency "acl" 00:02:01.013 Message: lib/bbdev: Defining dependency "bbdev" 00:02:01.013 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:01.013 Run-time dependency libelf found: YES 0.191 00:02:01.013 Message: lib/bpf: Defining dependency "bpf" 00:02:01.013 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:01.013 Message: lib/compressdev: Defining dependency "compressdev" 00:02:01.013 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:01.013 Message: lib/distributor: Defining dependency "distributor" 00:02:01.013 Message: lib/efd: Defining dependency "efd" 00:02:01.013 Message: lib/eventdev: Defining dependency "eventdev" 00:02:01.013 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:01.013 Message: lib/gro: Defining dependency "gro" 00:02:01.013 Message: lib/gso: Defining dependency "gso" 00:02:01.013 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:01.013 Message: lib/jobstats: Defining dependency "jobstats" 00:02:01.013 Message: lib/latencystats: Defining dependency "latencystats" 00:02:01.013 Message: lib/lpm: Defining dependency "lpm" 00:02:01.013 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:01.013 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:01.013 Message: lib/member: Defining dependency "member" 00:02:01.013 Message: lib/pcapng: Defining dependency "pcapng" 00:02:01.013 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:01.013 Message: lib/power: Defining dependency "power" 00:02:01.013 Message: lib/rawdev: Defining dependency "rawdev" 00:02:01.013 Message: lib/regexdev: Defining dependency "regexdev" 00:02:01.013 Message: lib/dmadev: Defining dependency "dmadev" 00:02:01.013 Message: lib/rib: Defining dependency "rib" 00:02:01.013 Message: lib/reorder: Defining dependency "reorder" 00:02:01.013 Message: lib/sched: Defining dependency "sched" 00:02:01.013 Message: lib/security: Defining dependency "security" 00:02:01.013 Message: lib/stack: Defining dependency "stack" 00:02:01.013 Has header "linux/userfaultfd.h" : YES 00:02:01.013 Message: lib/vhost: Defining dependency "vhost" 00:02:01.013 Message: lib/ipsec: Defining dependency "ipsec" 00:02:01.013 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:01.013 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.013 Message: lib/fib: Defining dependency "fib" 00:02:01.013 Message: lib/port: Defining dependency "port" 00:02:01.013 Message: lib/pdump: Defining dependency "pdump" 
00:02:01.013 Message: lib/table: Defining dependency "table" 00:02:01.013 Message: lib/pipeline: Defining dependency "pipeline" 00:02:01.013 Message: lib/graph: Defining dependency "graph" 00:02:01.013 Message: lib/node: Defining dependency "node" 00:02:01.013 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:01.013 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:01.013 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:01.013 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:01.013 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:01.013 Compiler for C supports arguments -Wno-unused-value: YES 00:02:01.013 Compiler for C supports arguments -Wno-format: YES 00:02:01.013 Compiler for C supports arguments -Wno-format-security: YES 00:02:01.013 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:01.013 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:01.274 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:01.274 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:01.274 Fetching value of define "__AVX2__" : 1 (cached) 00:02:01.274 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.274 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.274 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.274 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:01.274 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:01.274 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:01.274 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:01.274 Configuring doxy-api.conf using configuration 00:02:01.274 Program sphinx-build found: NO 00:02:01.274 Configuring rte_build_config.h using configuration 00:02:01.274 Message: 00:02:01.274 ================= 00:02:01.274 Applications Enabled 00:02:01.274 ================= 00:02:01.274 00:02:01.274 apps: 
00:02:01.274 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:02:01.274 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:02:01.274 test-security-perf,
00:02:01.274
00:02:01.274 Message:
00:02:01.274 =================
00:02:01.274 Libraries Enabled
00:02:01.274 =================
00:02:01.274
00:02:01.274 libs:
00:02:01.274 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:02:01.274 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:02:01.274 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:02:01.274 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:02:01.274 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:02:01.274 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:02:01.274 table, pipeline, graph, node,
00:02:01.274
00:02:01.274 Message:
00:02:01.274 ===============
00:02:01.274 Drivers Enabled
00:02:01.274 ===============
00:02:01.274
00:02:01.274 common:
00:02:01.274
00:02:01.274 bus:
00:02:01.274 pci, vdev,
00:02:01.274 mempool:
00:02:01.274 ring,
00:02:01.274 dma:
00:02:01.274
00:02:01.274 net:
00:02:01.274 i40e,
00:02:01.274 raw:
00:02:01.274
00:02:01.274 crypto:
00:02:01.274
00:02:01.274 compress:
00:02:01.274
00:02:01.274 regex:
00:02:01.274
00:02:01.274 vdpa:
00:02:01.274
00:02:01.274 event:
00:02:01.274
00:02:01.274 baseband:
00:02:01.274
00:02:01.274 gpu:
00:02:01.274
00:02:01.274
00:02:01.274 Message:
00:02:01.274 =================
00:02:01.274 Content Skipped
00:02:01.274 =================
00:02:01.274
00:02:01.274 apps:
00:02:01.274
00:02:01.274 libs:
00:02:01.274 kni: explicitly disabled via build config (deprecated lib)
00:02:01.274 flow_classify: explicitly disabled via build config (deprecated lib)
00:02:01.274
00:02:01.274 drivers:
00:02:01.274 common/cpt: not in enabled drivers build config
00:02:01.274 common/dpaax: not in enabled drivers build
config 00:02:01.274 common/iavf: not in enabled drivers build config 00:02:01.274 common/idpf: not in enabled drivers build config 00:02:01.274 common/mvep: not in enabled drivers build config 00:02:01.274 common/octeontx: not in enabled drivers build config 00:02:01.274 bus/auxiliary: not in enabled drivers build config 00:02:01.274 bus/dpaa: not in enabled drivers build config 00:02:01.274 bus/fslmc: not in enabled drivers build config 00:02:01.274 bus/ifpga: not in enabled drivers build config 00:02:01.274 bus/vmbus: not in enabled drivers build config 00:02:01.274 common/cnxk: not in enabled drivers build config 00:02:01.274 common/mlx5: not in enabled drivers build config 00:02:01.274 common/qat: not in enabled drivers build config 00:02:01.274 common/sfc_efx: not in enabled drivers build config 00:02:01.274 mempool/bucket: not in enabled drivers build config 00:02:01.274 mempool/cnxk: not in enabled drivers build config 00:02:01.274 mempool/dpaa: not in enabled drivers build config 00:02:01.274 mempool/dpaa2: not in enabled drivers build config 00:02:01.274 mempool/octeontx: not in enabled drivers build config 00:02:01.274 mempool/stack: not in enabled drivers build config 00:02:01.274 dma/cnxk: not in enabled drivers build config 00:02:01.274 dma/dpaa: not in enabled drivers build config 00:02:01.274 dma/dpaa2: not in enabled drivers build config 00:02:01.274 dma/hisilicon: not in enabled drivers build config 00:02:01.274 dma/idxd: not in enabled drivers build config 00:02:01.274 dma/ioat: not in enabled drivers build config 00:02:01.274 dma/skeleton: not in enabled drivers build config 00:02:01.274 net/af_packet: not in enabled drivers build config 00:02:01.274 net/af_xdp: not in enabled drivers build config 00:02:01.274 net/ark: not in enabled drivers build config 00:02:01.274 net/atlantic: not in enabled drivers build config 00:02:01.274 net/avp: not in enabled drivers build config 00:02:01.274 net/axgbe: not in enabled drivers build config 00:02:01.274 
net/bnx2x: not in enabled drivers build config 00:02:01.274 net/bnxt: not in enabled drivers build config 00:02:01.274 net/bonding: not in enabled drivers build config 00:02:01.275 net/cnxk: not in enabled drivers build config 00:02:01.275 net/cxgbe: not in enabled drivers build config 00:02:01.275 net/dpaa: not in enabled drivers build config 00:02:01.275 net/dpaa2: not in enabled drivers build config 00:02:01.275 net/e1000: not in enabled drivers build config 00:02:01.275 net/ena: not in enabled drivers build config 00:02:01.275 net/enetc: not in enabled drivers build config 00:02:01.275 net/enetfec: not in enabled drivers build config 00:02:01.275 net/enic: not in enabled drivers build config 00:02:01.275 net/failsafe: not in enabled drivers build config 00:02:01.275 net/fm10k: not in enabled drivers build config 00:02:01.275 net/gve: not in enabled drivers build config 00:02:01.275 net/hinic: not in enabled drivers build config 00:02:01.275 net/hns3: not in enabled drivers build config 00:02:01.275 net/iavf: not in enabled drivers build config 00:02:01.275 net/ice: not in enabled drivers build config 00:02:01.275 net/idpf: not in enabled drivers build config 00:02:01.275 net/igc: not in enabled drivers build config 00:02:01.275 net/ionic: not in enabled drivers build config 00:02:01.275 net/ipn3ke: not in enabled drivers build config 00:02:01.275 net/ixgbe: not in enabled drivers build config 00:02:01.275 net/kni: not in enabled drivers build config 00:02:01.275 net/liquidio: not in enabled drivers build config 00:02:01.275 net/mana: not in enabled drivers build config 00:02:01.275 net/memif: not in enabled drivers build config 00:02:01.275 net/mlx4: not in enabled drivers build config 00:02:01.275 net/mlx5: not in enabled drivers build config 00:02:01.275 net/mvneta: not in enabled drivers build config 00:02:01.275 net/mvpp2: not in enabled drivers build config 00:02:01.275 net/netvsc: not in enabled drivers build config 00:02:01.275 net/nfb: not in enabled 
drivers build config 00:02:01.275 net/nfp: not in enabled drivers build config 00:02:01.275 net/ngbe: not in enabled drivers build config 00:02:01.275 net/null: not in enabled drivers build config 00:02:01.275 net/octeontx: not in enabled drivers build config 00:02:01.275 net/octeon_ep: not in enabled drivers build config 00:02:01.275 net/pcap: not in enabled drivers build config 00:02:01.275 net/pfe: not in enabled drivers build config 00:02:01.275 net/qede: not in enabled drivers build config 00:02:01.275 net/ring: not in enabled drivers build config 00:02:01.275 net/sfc: not in enabled drivers build config 00:02:01.275 net/softnic: not in enabled drivers build config 00:02:01.275 net/tap: not in enabled drivers build config 00:02:01.275 net/thunderx: not in enabled drivers build config 00:02:01.275 net/txgbe: not in enabled drivers build config 00:02:01.275 net/vdev_netvsc: not in enabled drivers build config 00:02:01.275 net/vhost: not in enabled drivers build config 00:02:01.275 net/virtio: not in enabled drivers build config 00:02:01.275 net/vmxnet3: not in enabled drivers build config 00:02:01.275 raw/cnxk_bphy: not in enabled drivers build config 00:02:01.275 raw/cnxk_gpio: not in enabled drivers build config 00:02:01.275 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:01.275 raw/ifpga: not in enabled drivers build config 00:02:01.275 raw/ntb: not in enabled drivers build config 00:02:01.275 raw/skeleton: not in enabled drivers build config 00:02:01.275 crypto/armv8: not in enabled drivers build config 00:02:01.275 crypto/bcmfs: not in enabled drivers build config 00:02:01.275 crypto/caam_jr: not in enabled drivers build config 00:02:01.275 crypto/ccp: not in enabled drivers build config 00:02:01.275 crypto/cnxk: not in enabled drivers build config 00:02:01.275 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.275 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.275 crypto/ipsec_mb: not in enabled drivers build config 
00:02:01.275 crypto/mlx5: not in enabled drivers build config 00:02:01.275 crypto/mvsam: not in enabled drivers build config 00:02:01.275 crypto/nitrox: not in enabled drivers build config 00:02:01.275 crypto/null: not in enabled drivers build config 00:02:01.275 crypto/octeontx: not in enabled drivers build config 00:02:01.275 crypto/openssl: not in enabled drivers build config 00:02:01.275 crypto/scheduler: not in enabled drivers build config 00:02:01.275 crypto/uadk: not in enabled drivers build config 00:02:01.275 crypto/virtio: not in enabled drivers build config 00:02:01.275 compress/isal: not in enabled drivers build config 00:02:01.275 compress/mlx5: not in enabled drivers build config 00:02:01.275 compress/octeontx: not in enabled drivers build config 00:02:01.275 compress/zlib: not in enabled drivers build config 00:02:01.275 regex/mlx5: not in enabled drivers build config 00:02:01.275 regex/cn9k: not in enabled drivers build config 00:02:01.275 vdpa/ifc: not in enabled drivers build config 00:02:01.275 vdpa/mlx5: not in enabled drivers build config 00:02:01.275 vdpa/sfc: not in enabled drivers build config 00:02:01.275 event/cnxk: not in enabled drivers build config 00:02:01.275 event/dlb2: not in enabled drivers build config 00:02:01.275 event/dpaa: not in enabled drivers build config 00:02:01.275 event/dpaa2: not in enabled drivers build config 00:02:01.275 event/dsw: not in enabled drivers build config 00:02:01.275 event/opdl: not in enabled drivers build config 00:02:01.275 event/skeleton: not in enabled drivers build config 00:02:01.275 event/sw: not in enabled drivers build config 00:02:01.275 event/octeontx: not in enabled drivers build config 00:02:01.275 baseband/acc: not in enabled drivers build config 00:02:01.275 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:01.275 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:01.275 baseband/la12xx: not in enabled drivers build config 00:02:01.275 baseband/null: not in 
enabled drivers build config 00:02:01.275 baseband/turbo_sw: not in enabled drivers build config 00:02:01.275 gpu/cuda: not in enabled drivers build config 00:02:01.275 00:02:01.275 00:02:01.275 Build targets in project: 311 00:02:01.275 00:02:01.275 DPDK 22.11.4 00:02:01.275 00:02:01.275 User defined options 00:02:01.275 libdir : lib 00:02:01.275 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:01.275 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:01.275 c_link_args : 00:02:01.275 enable_docs : false 00:02:01.275 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.275 enable_kmods : false 00:02:01.275 machine : native 00:02:01.275 tests : false 00:02:01.275 00:02:01.275 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.275 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:01.275 01:46:06 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:01.535 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:01.535 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:01.535 [2/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:01.535 [3/740] Generating lib/rte_kvargs_def with a custom command 00:02:01.535 [4/740] Generating lib/rte_telemetry_def with a custom command 00:02:01.535 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.535 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.535 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.535 [8/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.535 [9/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.535 [10/740] Linking static target lib/librte_kvargs.a 00:02:01.535 [11/740] Compiling C object 
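The WARNING above notes that invoking meson as `meson [options]` is deprecated in favor of the explicit `meson setup [options]` subcommand. As a sketch only, here is roughly what the non-deprecated invocation implied by this log's "User defined options" block would look like; the build directory, prefix path, and option values are taken from this log and are specific to this build host, and the command is printed rather than executed since meson and the DPDK source tree are assumptions of this sketch:

```shell
# Sketch: the `meson setup` form of the configuration reported in the log
# above. Paths/values mirror the "User defined options" block and will
# differ on other hosts. We compose and print the command instead of
# running it, because meson and the DPDK checkout are not guaranteed here.
MESON_CMD="meson setup build-tmp \
--prefix=/home/vagrant/spdk_repo/dpdk/build --libdir=lib \
-Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
-Denable_docs=false -Denable_kmods=false -Dtests=false -Dmachine=native \
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base"
echo "$MESON_CMD"
```

Running the printed command from the DPDK source root (followed by `ninja -C build-tmp`, as the log does) should produce the same configuration without the deprecation warning.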
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.535 [12/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:01.535 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:01.535 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.796 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:01.796 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:01.796 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:01.796 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:01.796 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:01.796 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.796 [21/740] Linking target lib/librte_kvargs.so.23.0 00:02:01.796 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:01.796 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:01.796 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.796 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:01.796 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:01.796 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:01.796 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:01.796 [29/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:01.796 [30/740] Linking static target lib/librte_telemetry.a 00:02:02.055 [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.055 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.055 [33/740] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.055 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:02.055 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.055 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:02.055 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:02.055 [38/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:02.055 [39/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:02.055 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:02.055 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.315 [42/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.315 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.315 [44/740] Linking target lib/librte_telemetry.so.23.0 00:02:02.315 [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.315 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.315 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.315 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.315 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.315 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:02.315 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.315 [52/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.315 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:02.315 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 
00:02:02.315 [55/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:02.315 [56/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.315 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.315 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.575 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:02.575 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.575 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.575 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.575 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:02.575 [64/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:02.575 [65/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.575 [66/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.575 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.575 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.575 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:02.575 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.575 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.575 [72/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.575 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:02.575 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.575 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:02.575 [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:02.575 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.575 
[78/740] Generating lib/rte_eal_def with a custom command 00:02:02.836 [79/740] Generating lib/rte_eal_mingw with a custom command 00:02:02.836 [80/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:02.836 [81/740] Generating lib/rte_ring_def with a custom command 00:02:02.836 [82/740] Generating lib/rte_ring_mingw with a custom command 00:02:02.836 [83/740] Generating lib/rte_rcu_def with a custom command 00:02:02.836 [84/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.836 [85/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.836 [86/740] Generating lib/rte_rcu_mingw with a custom command 00:02:02.836 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.836 [88/740] Linking static target lib/librte_ring.a 00:02:02.836 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.836 [90/740] Generating lib/rte_mempool_def with a custom command 00:02:02.836 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:02:02.836 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:03.097 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:03.097 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.097 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:03.097 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:03.097 [97/740] Generating lib/rte_mbuf_def with a custom command 00:02:03.097 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.097 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:03.097 [100/740] Linking static target lib/librte_eal.a 00:02:03.097 [101/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:03.357 [102/740] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.357 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.357 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.357 [105/740] Linking static target lib/librte_rcu.a 00:02:03.357 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.357 [107/740] Linking static target lib/librte_mempool.a 00:02:03.617 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.617 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:03.617 [110/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.617 [111/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:03.617 [112/740] Generating lib/rte_net_def with a custom command 00:02:03.617 [113/740] Generating lib/rte_net_mingw with a custom command 00:02:03.617 [114/740] Generating lib/rte_meter_def with a custom command 00:02:03.617 [115/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.617 [116/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:03.617 [117/740] Generating lib/rte_meter_mingw with a custom command 00:02:03.617 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:03.617 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:03.617 [120/740] Linking static target lib/librte_meter.a 00:02:03.617 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:03.877 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:03.877 [123/740] Linking static target lib/librte_net.a 00:02:03.877 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.877 [125/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.137 [126/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 
00:02:04.137 [127/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.137 [128/740] Linking static target lib/librte_mbuf.a 00:02:04.137 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.138 [130/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.138 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.138 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:04.138 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.398 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:04.398 [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:04.398 [136/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.398 [137/740] Generating lib/rte_ethdev_def with a custom command 00:02:04.398 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:04.398 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:04.659 [140/740] Generating lib/rte_pci_def with a custom command 00:02:04.659 [141/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:04.659 [142/740] Generating lib/rte_pci_mingw with a custom command 00:02:04.659 [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:04.659 [144/740] Linking static target lib/librte_pci.a 00:02:04.659 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:04.659 [146/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:04.659 [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:04.659 [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:04.659 [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.659 
[150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:04.919 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:04.919 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:04.919 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:04.919 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:04.919 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:04.919 [156/740] Generating lib/rte_cmdline_def with a custom command 00:02:04.919 [157/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:04.919 [158/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:04.919 [159/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:04.919 [160/740] Generating lib/rte_metrics_def with a custom command 00:02:04.919 [161/740] Generating lib/rte_metrics_mingw with a custom command 00:02:04.919 [162/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:04.919 [163/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:04.919 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:04.919 [165/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:04.919 [166/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:04.919 [167/740] Linking static target lib/librte_cmdline.a 00:02:04.919 [168/740] Generating lib/rte_hash_def with a custom command 00:02:04.919 [169/740] Generating lib/rte_hash_mingw with a custom command 00:02:05.179 [170/740] Generating lib/rte_timer_def with a custom command 00:02:05.179 [171/740] Generating lib/rte_timer_mingw with a custom command 00:02:05.180 [172/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:05.180 [173/740] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:05.180 [174/740] Linking static target lib/librte_metrics.a 00:02:05.180 [175/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:05.439 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:05.439 [177/740] Linking static target lib/librte_timer.a 00:02:05.439 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.698 [179/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.698 [180/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:05.698 [181/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:05.698 [182/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.698 [183/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:05.958 [184/740] Generating lib/rte_acl_def with a custom command 00:02:05.958 [185/740] Generating lib/rte_acl_mingw with a custom command 00:02:05.958 [186/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:05.958 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.958 [188/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:05.958 [189/740] Generating lib/rte_bbdev_def with a custom command 00:02:05.958 [190/740] Linking static target lib/librte_ethdev.a 00:02:05.958 [191/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:05.958 [192/740] Generating lib/rte_bitratestats_def with a custom command 00:02:05.958 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:06.528 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:06.528 [195/740] Linking static target lib/librte_bitratestats.a 00:02:06.528 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:06.528 [197/740] Compiling C object 
lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:06.528 [198/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:06.528 [199/740] Linking static target lib/librte_bbdev.a 00:02:06.528 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.788 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:06.788 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:07.047 [203/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:07.047 [204/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.047 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:07.307 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.307 [207/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:07.307 [208/740] Linking static target lib/librte_hash.a 00:02:07.566 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:07.566 [210/740] Generating lib/rte_bpf_def with a custom command 00:02:07.567 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:02:07.567 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:07.567 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:02:07.567 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:07.827 [215/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:07.827 [216/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:07.827 [217/740] Linking static target lib/librte_cfgfile.a 00:02:07.827 [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:07.827 [219/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.827 [220/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:07.827 [221/740] Generating lib/rte_compressdev_def with a custom 
command 00:02:07.827 [222/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:07.827 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:07.827 [224/740] Linking static target lib/librte_bpf.a 00:02:08.087 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.087 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:08.087 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:02:08.087 [228/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:08.087 [229/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:08.087 [230/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.087 [231/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:08.346 [232/740] Linking static target lib/librte_acl.a 00:02:08.346 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:08.346 [234/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.346 [235/740] Linking static target lib/librte_compressdev.a 00:02:08.346 [236/740] Generating lib/rte_distributor_def with a custom command 00:02:08.346 [237/740] Generating lib/rte_distributor_mingw with a custom command 00:02:08.346 [238/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.346 [239/740] Generating lib/rte_efd_def with a custom command 00:02:08.346 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:08.346 [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.635 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:08.635 [243/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:08.635 [244/740] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.635 [245/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:08.635 [246/740] Linking target lib/librte_eal.so.23.0 00:02:08.911 [247/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:08.911 [248/740] Linking static target lib/librte_distributor.a 00:02:08.911 [249/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:08.911 [250/740] Linking target lib/librte_ring.so.23.0 00:02:08.911 [251/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:08.911 [252/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.911 [253/740] Linking target lib/librte_meter.so.23.0 00:02:08.911 [254/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.911 [255/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:08.911 [256/740] Linking target lib/librte_pci.so.23.0 00:02:08.911 [257/740] Linking target lib/librte_timer.so.23.0 00:02:08.911 [258/740] Linking target lib/librte_rcu.so.23.0 00:02:08.911 [259/740] Linking target lib/librte_mempool.so.23.0 00:02:09.170 [260/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:09.170 [261/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:09.170 [262/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:09.170 [263/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:09.170 [264/740] Linking target lib/librte_acl.so.23.0 00:02:09.170 [265/740] Linking target lib/librte_cfgfile.so.23.0 00:02:09.170 [266/740] Linking target lib/librte_mbuf.so.23.0 00:02:09.170 [267/740] Generating symbol file 
lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:09.170 [268/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:09.170 [269/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:09.428 [270/740] Linking target lib/librte_net.so.23.0 00:02:09.428 [271/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:09.428 [272/740] Linking target lib/librte_cmdline.so.23.0 00:02:09.428 [273/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:09.428 [274/740] Linking target lib/librte_hash.so.23.0 00:02:09.428 [275/740] Linking target lib/librte_bbdev.so.23.0 00:02:09.428 [276/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:09.428 [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:09.428 [278/740] Linking target lib/librte_compressdev.so.23.0 00:02:09.428 [279/740] Linking static target lib/librte_efd.a 00:02:09.428 [280/740] Linking target lib/librte_distributor.so.23.0 00:02:09.428 [281/740] Generating lib/rte_eventdev_def with a custom command 00:02:09.688 [282/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:09.688 [283/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:09.688 [284/740] Generating lib/rte_gpudev_def with a custom command 00:02:09.688 [285/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:09.688 [286/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:09.688 [287/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.688 [288/740] Linking target lib/librte_efd.so.23.0 00:02:09.688 [289/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:09.688 [290/740] Linking static target lib/librte_cryptodev.a 00:02:09.946 [291/740] Generating lib/ethdev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:09.946 [292/740] Linking target lib/librte_ethdev.so.23.0 00:02:09.946 [293/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:09.946 [294/740] Linking target lib/librte_metrics.so.23.0 00:02:10.204 [295/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:10.204 [296/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:10.204 [297/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:10.204 [298/740] Linking target lib/librte_bpf.so.23.0 00:02:10.204 [299/740] Generating lib/rte_gro_def with a custom command 00:02:10.204 [300/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:10.204 [301/740] Linking static target lib/librte_gpudev.a 00:02:10.204 [302/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:10.204 [303/740] Generating lib/rte_gro_mingw with a custom command 00:02:10.204 [304/740] Linking target lib/librte_bitratestats.so.23.0 00:02:10.204 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:10.204 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:10.204 [307/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:10.204 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:10.463 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:10.722 [310/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:10.722 [311/740] Generating lib/rte_gso_def with a custom command 00:02:10.722 [312/740] Generating lib/rte_gso_mingw with a custom command 00:02:10.722 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:10.722 [314/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:10.722 [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:10.722 [316/740] Compiling C 
object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:10.722 [317/740] Linking static target lib/librte_gro.a 00:02:10.722 [318/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:10.722 [319/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:10.722 [320/740] Linking static target lib/librte_gso.a 00:02:10.722 [321/740] Linking static target lib/librte_eventdev.a 00:02:10.981 [322/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.981 [323/740] Linking target lib/librte_gpudev.so.23.0 00:02:10.981 [324/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.981 [325/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.981 [326/740] Linking target lib/librte_gso.so.23.0 00:02:10.981 [327/740] Linking target lib/librte_gro.so.23.0 00:02:10.981 [328/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:10.981 [329/740] Generating lib/rte_ip_frag_def with a custom command 00:02:10.981 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:10.981 [331/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:10.981 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:10.981 [333/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:10.981 [334/740] Generating lib/rte_latencystats_def with a custom command 00:02:11.240 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:11.240 [336/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:11.240 [337/740] Linking static target lib/librte_jobstats.a 00:02:11.240 [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:11.240 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:11.240 [340/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:11.240 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:11.240 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:11.499 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.499 [344/740] Linking target lib/librte_jobstats.so.23.0 00:02:11.499 [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:11.499 [346/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:11.499 [347/740] Linking static target lib/librte_ip_frag.a 00:02:11.499 [348/740] Linking static target lib/librte_latencystats.a 00:02:11.499 [349/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:11.499 [350/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.759 [351/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:11.759 [352/740] Linking target lib/librte_cryptodev.so.23.0 00:02:11.759 [353/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:11.759 [354/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:11.759 [355/740] Generating lib/rte_member_def with a custom command 00:02:11.759 [356/740] Generating lib/rte_member_mingw with a custom command 00:02:11.759 [357/740] Generating lib/rte_pcapng_def with a custom command 00:02:11.759 [358/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:11.759 [359/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.759 [360/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.759 [361/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:11.759 [362/740] Linking target lib/librte_latencystats.so.23.0 
00:02:11.759 [363/740] Linking target lib/librte_ip_frag.so.23.0 00:02:11.759 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:11.759 [365/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:12.017 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:12.017 [367/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:12.017 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:12.017 [369/740] Linking static target lib/librte_lpm.a 00:02:12.017 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:12.276 [371/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:12.276 [372/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:12.276 [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:12.276 [374/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:12.276 [375/740] Generating lib/rte_power_def with a custom command 00:02:12.276 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:12.276 [377/740] Generating lib/rte_rawdev_def with a custom command 00:02:12.276 [378/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:12.276 [379/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.276 [380/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:12.276 [381/740] Linking target lib/librte_lpm.so.23.0 00:02:12.276 [382/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:12.276 [383/740] Linking static target lib/librte_pcapng.a 00:02:12.276 [384/740] Generating lib/rte_regexdev_def with a custom command 00:02:12.276 [385/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.276 [386/740] Generating lib/rte_regexdev_mingw with a custom 
command 00:02:12.536 [387/740] Linking target lib/librte_eventdev.so.23.0 00:02:12.536 [388/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:12.536 [389/740] Generating lib/rte_dmadev_def with a custom command 00:02:12.536 [390/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:12.536 [391/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:12.536 [392/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:12.536 [393/740] Generating lib/rte_rib_def with a custom command 00:02:12.536 [394/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:12.536 [395/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:12.536 [396/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.536 [397/740] Linking static target lib/librte_rawdev.a 00:02:12.536 [398/740] Generating lib/rte_rib_mingw with a custom command 00:02:12.536 [399/740] Generating lib/rte_reorder_def with a custom command 00:02:12.536 [400/740] Linking target lib/librte_pcapng.so.23.0 00:02:12.536 [401/740] Generating lib/rte_reorder_mingw with a custom command 00:02:12.795 [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:12.795 [403/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:12.795 [404/740] Linking static target lib/librte_dmadev.a 00:02:12.795 [405/740] Linking static target lib/librte_power.a 00:02:12.795 [406/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:12.795 [407/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:12.795 [408/740] Linking static target lib/librte_regexdev.a 00:02:12.795 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:12.795 [410/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:13.055 
[411/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.055 [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:13.055 [413/740] Generating lib/rte_sched_def with a custom command 00:02:13.055 [414/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:13.055 [415/740] Linking target lib/librte_rawdev.so.23.0 00:02:13.055 [416/740] Generating lib/rte_sched_mingw with a custom command 00:02:13.055 [417/740] Generating lib/rte_security_def with a custom command 00:02:13.055 [418/740] Generating lib/rte_security_mingw with a custom command 00:02:13.055 [419/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:13.055 [420/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:13.055 [421/740] Linking static target lib/librte_member.a 00:02:13.055 [422/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.055 [423/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.055 [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:13.055 [425/740] Linking static target lib/librte_reorder.a 00:02:13.055 [426/740] Linking target lib/librte_dmadev.so.23.0 00:02:13.055 [427/740] Generating lib/rte_stack_def with a custom command 00:02:13.315 [428/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:13.315 [429/740] Generating lib/rte_stack_mingw with a custom command 00:02:13.315 [430/740] Linking static target lib/librte_stack.a 00:02:13.315 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:13.315 [432/740] Linking static target lib/librte_rib.a 00:02:13.315 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:13.315 [434/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:13.315 [435/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:13.315 [436/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.315 [437/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.315 [438/740] Linking target lib/librte_regexdev.so.23.0 00:02:13.315 [439/740] Linking target lib/librte_reorder.so.23.0 00:02:13.315 [440/740] Linking target lib/librte_stack.so.23.0 00:02:13.315 [441/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.574 [442/740] Linking target lib/librte_member.so.23.0 00:02:13.574 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.574 [444/740] Linking target lib/librte_power.so.23.0 00:02:13.574 [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.574 [446/740] Linking static target lib/librte_security.a 00:02:13.574 [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.574 [448/740] Linking target lib/librte_rib.so.23.0 00:02:13.834 [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:13.834 [450/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.834 [451/740] Generating lib/rte_vhost_def with a custom command 00:02:13.834 [452/740] Generating lib/rte_vhost_mingw with a custom command 00:02:13.834 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.834 [454/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.834 [455/740] Linking target lib/librte_security.so.23.0 00:02:13.834 [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:14.093 [457/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:14.094 [458/740] Linking static target lib/librte_sched.a 00:02:14.094 [459/740] Generating symbol file 
lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:14.353 [460/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.353 [461/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:14.353 [462/740] Linking target lib/librte_sched.so.23.0 00:02:14.353 [463/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:14.353 [464/740] Generating lib/rte_ipsec_def with a custom command 00:02:14.353 [465/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:14.353 [466/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:14.612 [467/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:14.612 [468/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.612 [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.612 [470/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:14.612 [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:14.612 [472/740] Generating lib/rte_fib_def with a custom command 00:02:14.613 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:14.872 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:15.132 [475/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:15.132 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:15.132 [477/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:15.132 [478/740] Linking static target lib/librte_ipsec.a 00:02:15.132 [479/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:15.391 [480/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.391 [481/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:15.391 [482/740] Linking target lib/librte_ipsec.so.23.0 00:02:15.391 [483/740] Compiling C object 
lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:15.651 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:15.651 [485/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:15.651 [486/740] Linking static target lib/librte_fib.a 00:02:15.651 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:15.651 [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:15.651 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:15.911 [490/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.911 [491/740] Linking target lib/librte_fib.so.23.0 00:02:16.171 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:16.171 [493/740] Generating lib/rte_port_def with a custom command 00:02:16.171 [494/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:16.171 [495/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:16.171 [496/740] Generating lib/rte_port_mingw with a custom command 00:02:16.171 [497/740] Generating lib/rte_pdump_def with a custom command 00:02:16.171 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:02:16.171 [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:16.430 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:16.430 [501/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:16.430 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:16.430 [503/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:16.430 [504/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:16.690 [505/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:16.690 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 
00:02:16.690 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:16.690 [508/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:16.949 [509/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:16.949 [510/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:16.949 [511/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:16.949 [512/740] Linking static target lib/librte_port.a 00:02:16.949 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:16.949 [514/740] Linking static target lib/librte_pdump.a 00:02:17.208 [515/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.468 [516/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:17.468 [517/740] Linking target lib/librte_pdump.so.23.0 00:02:17.468 [518/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:17.468 [519/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.468 [520/740] Generating lib/rte_table_def with a custom command 00:02:17.468 [521/740] Generating lib/rte_table_mingw with a custom command 00:02:17.468 [522/740] Linking target lib/librte_port.so.23.0 00:02:17.728 [523/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:17.729 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:17.729 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:17.729 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:17.729 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:17.729 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:17.729 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:17.729 [530/740] 
Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:17.729 [531/740] Linking static target lib/librte_table.a 00:02:17.729 [532/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:17.988 [533/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:18.248 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:18.248 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:18.248 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.248 [537/740] Linking target lib/librte_table.so.23.0 00:02:18.248 [538/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:18.507 [539/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.507 [540/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:18.508 [541/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:18.508 [542/740] Generating lib/rte_graph_def with a custom command 00:02:18.508 [543/740] Generating lib/rte_graph_mingw with a custom command 00:02:18.508 [544/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:18.767 [545/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:18.767 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:18.767 [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:18.767 [548/740] Linking static target lib/librte_graph.a 00:02:18.767 [549/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:19.027 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:19.027 [551/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:19.027 [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:19.284 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:19.285 [554/740] Generating 
lib/rte_node_def with a custom command 00:02:19.285 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:19.285 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:19.285 [557/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.285 [558/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:19.285 [559/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:19.285 [560/740] Linking target lib/librte_graph.so.23.0 00:02:19.543 [561/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:19.543 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:19.543 [563/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:19.543 [564/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:19.543 [565/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:19.543 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:19.543 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:19.543 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:19.543 [569/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:19.543 [570/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:19.807 [571/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:19.807 [572/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:19.807 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:19.807 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:19.807 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:19.807 [576/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:19.807 [577/740] 
Linking static target lib/librte_node.a 00:02:19.807 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:19.807 [579/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:19.807 [580/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:19.807 [581/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:19.807 [582/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.067 [583/740] Linking static target drivers/librte_bus_vdev.a 00:02:20.067 [584/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.067 [585/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:20.067 [586/740] Linking target lib/librte_node.so.23.0 00:02:20.067 [587/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.067 [588/740] Linking static target drivers/librte_bus_pci.a 00:02:20.067 [589/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:20.067 [590/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:20.067 [591/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.067 [592/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:20.325 [593/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:20.325 [594/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.325 [595/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:20.325 [596/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:20.325 [597/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:20.325 [598/740] Generating symbol file 
drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:20.584 [599/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.584 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:20.584 [601/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.584 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:20.584 [603/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:20.584 [604/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.584 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:20.584 [606/740] Linking static target drivers/librte_mempool_ring.a 00:02:20.584 [607/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:20.844 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:21.104 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:21.365 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:21.365 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:21.365 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:21.937 [613/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:21.937 [614/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:21.937 [615/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:21.937 [616/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:22.197 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:22.197 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:22.197 [619/740] Generating 
drivers/rte_net_i40e_def with a custom command 00:02:22.197 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:22.197 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:22.766 [622/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:23.337 [623/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:23.337 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:23.337 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:23.337 [626/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:23.337 [627/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:23.597 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:23.597 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:23.597 [630/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:23.597 [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:23.867 [632/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:23.867 [633/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:24.151 [634/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:24.151 [635/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:24.151 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:24.428 [637/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:24.428 [638/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:24.428 [639/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:24.428 [640/740] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:24.428 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:24.687 [642/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:24.687 [643/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:24.946 [644/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.946 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:24.946 [646/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:24.946 [647/740] Linking static target drivers/librte_net_i40e.a 00:02:24.946 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:24.946 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:24.946 [650/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:25.205 [651/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:25.205 [652/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:25.205 [653/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.464 [654/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:25.464 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:25.464 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:25.464 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:25.724 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:25.724 [659/740] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:25.724 [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:25.724 [661/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.724 [662/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:25.724 [663/740] Linking static target lib/librte_vhost.a 00:02:25.724 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:25.981 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:26.241 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:26.241 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:26.241 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:26.500 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:26.758 [670/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.758 [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:26.758 [672/740] Linking target lib/librte_vhost.so.23.0 00:02:26.758 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:27.017 [674/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:27.017 [675/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:27.017 [676/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:27.017 [677/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:27.276 [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:27.276 [679/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:27.276 
[680/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:27.535 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:27.535 [682/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:27.535 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:27.535 [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:27.535 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:27.793 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:27.793 [687/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:27.793 [688/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:27.793 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:27.793 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:28.052 [691/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:28.052 [692/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:28.311 [693/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:28.311 [694/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:28.311 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:28.569 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:28.829 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:28.829 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:28.829 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:28.829 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:29.089 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:29.348 [702/740] 
Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:29.348 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:29.348 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:29.608 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:29.608 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:29.608 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:29.867 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:30.126 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:30.386 [710/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:30.386 [711/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:30.386 [712/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:30.386 [713/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:30.386 [714/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:30.645 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:30.645 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:30.645 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:30.904 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:31.163 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:32.543 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:32.543 [721/740] Linking static target lib/librte_pipeline.a 00:02:33.109 [722/740] Linking target app/dpdk-test-compress-perf 00:02:33.109 [723/740] Linking target app/dpdk-pdump 00:02:33.109 [724/740] Linking target app/dpdk-test-cmdline 00:02:33.109 [725/740] Linking target app/dpdk-test-acl 00:02:33.109 [726/740] Linking target app/dpdk-test-bbdev 
00:02:33.109 [727/740] Linking target app/dpdk-test-eventdev
00:02:33.109 [728/740] Linking target app/dpdk-dumpcap
00:02:33.109 [729/740] Linking target app/dpdk-test-crypto-perf
00:02:33.109 [730/740] Linking target app/dpdk-proc-info
00:02:33.368 [731/740] Linking target app/dpdk-test-fib
00:02:33.368 [732/740] Linking target app/dpdk-test-flow-perf
00:02:33.368 [733/740] Linking target app/dpdk-test-gpudev
00:02:33.368 [734/740] Linking target app/dpdk-test-sad
00:02:33.368 [735/740] Linking target app/dpdk-test-pipeline
00:02:33.368 [736/740] Linking target app/dpdk-testpmd
00:02:33.368 [737/740] Linking target app/dpdk-test-security-perf
00:02:33.368 [738/740] Linking target app/dpdk-test-regex
00:02:37.580 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:37.580 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:37.580 01:46:42 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:37.580 01:46:42 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:37.580 01:46:42 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:37.580 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:37.580 [0/1] Installing files.
00:02:37.842 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:37.842 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:37.842 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.843 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:37.844 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:37.844 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:37.844 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:37.845 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.107 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:38.108 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:38.109 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:38.109 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.109 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.110 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:38.111 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:38.111 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:38.111 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:38.111 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:38.111 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:38.111 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:38.111 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:38.111 Installing app/dpdk-test-acl to
/home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.111 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.111 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.111 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.111 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.111 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.374 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 
Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.375 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.376 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 
Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.377 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:38.377 Installing symlink pointing to librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:38.377 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:38.377 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:38.377 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:38.377 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:38.377 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:38.377 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:38.377 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:38.377 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:38.377 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:38.377 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:38.377 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:38.377 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:38.377 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:38.377 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:38.377 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:38.377 Installing symlink pointing to librte_meter.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:38.377 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:38.377 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:38.377 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:38.377 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:38.377 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:38.377 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:38.377 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:38.377 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:38.377 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:38.377 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:38.377 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:38.377 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:38.377 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:38.377 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:38.377 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:38.377 Installing symlink pointing to librte_bbdev.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:38.377 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:38.377 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:38.377 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:38.377 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:38.377 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:38.377 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:38.377 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:38.377 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:38.377 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:38.377 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:38.377 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:38.377 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:38.377 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:38.377 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:38.377 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 
00:02:38.377 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:38.377 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:38.377 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:38.377 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:38.377 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:38.377 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:38.377 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:38.377 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:38.377 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:38.377 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:38.377 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:38.377 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:38.377 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:38.377 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:38.377 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:38.377 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:38.377 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:38.377 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:38.377 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:38.377 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:38.377 Installing 
symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:38.377 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:38.377 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:38.377 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:38.377 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:38.377 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:38.377 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:38.377 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:38.377 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:38.377 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:38.377 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:38.377 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:38.377 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:38.377 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:38.377 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:38.377 Installing symlink pointing to librte_rawdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:38.377 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:38.378 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:38.378 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:38.378 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:38.378 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:38.378 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:38.378 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:38.378 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:38.378 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:38.378 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:38.378 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:38.378 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:38.378 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:38.378 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:38.378 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:38.378 Installing symlink pointing to 
librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:38.378 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:38.378 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:38.378 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:38.378 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:38.378 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:38.378 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:38.378 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:38.378 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:38.378 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:38.378 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:38.378 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:38.378 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:38.378 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:38.378 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:38.378 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:38.378 Installing symlink pointing to librte_node.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:38.378 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:38.378 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:38.378 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:38.378 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:38.378 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:38.378 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:38.378 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:38.378 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:38.378 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:38.378 01:46:43 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:02:38.378 01:46:43 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:38.378 00:02:38.378 real 0m44.104s 00:02:38.378 user 4m15.385s 00:02:38.378 sys 0m48.870s 00:02:38.378 01:46:43 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:38.378 01:46:43 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:38.378 ************************************ 00:02:38.378 END TEST build_native_dpdk 00:02:38.378 ************************************ 00:02:38.378 01:46:43 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:38.378 01:46:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:38.378 01:46:43 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:38.638 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:38.898 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:38.898 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:38.898 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:39.158 Using 'verbs' RDMA provider 00:02:55.455 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:10.434 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:11.007 Creating mk/config.mk...done. 00:03:11.007 Creating mk/cc.flags.mk...done. 00:03:11.007 Type 'make' to build. 
00:03:11.007 01:47:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:11.007 01:47:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:11.007 01:47:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:11.007 01:47:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.007 ************************************ 00:03:11.007 START TEST make 00:03:11.007 ************************************ 00:03:11.007 01:47:16 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:11.267 make[1]: Nothing to be done for 'all'. 00:03:57.959 CC lib/log/log_flags.o 00:03:57.959 CC lib/log/log.o 00:03:57.959 CC lib/ut/ut.o 00:03:57.959 CC lib/log/log_deprecated.o 00:03:57.959 CC lib/ut_mock/mock.o 00:03:57.959 LIB libspdk_log.a 00:03:57.959 LIB libspdk_ut_mock.a 00:03:57.959 LIB libspdk_ut.a 00:03:57.959 SO libspdk_ut_mock.so.6.0 00:03:57.959 SO libspdk_ut.so.2.0 00:03:57.959 SO libspdk_log.so.7.0 00:03:57.959 SYMLINK libspdk_ut_mock.so 00:03:57.959 SYMLINK libspdk_log.so 00:03:57.959 SYMLINK libspdk_ut.so 00:03:57.959 CC lib/util/base64.o 00:03:57.959 CC lib/util/crc16.o 00:03:57.959 CC lib/util/crc32.o 00:03:57.959 CC lib/util/crc32c.o 00:03:57.959 CC lib/util/bit_array.o 00:03:57.959 CC lib/util/cpuset.o 00:03:57.959 CC lib/ioat/ioat.o 00:03:57.959 CC lib/dma/dma.o 00:03:57.959 CXX lib/trace_parser/trace.o 00:03:57.959 CC lib/vfio_user/host/vfio_user_pci.o 00:03:57.959 CC lib/util/crc32_ieee.o 00:03:57.959 CC lib/util/crc64.o 00:03:57.959 CC lib/util/dif.o 00:03:57.959 CC lib/vfio_user/host/vfio_user.o 00:03:57.959 CC lib/util/fd.o 00:03:57.959 LIB libspdk_dma.a 00:03:57.959 SO libspdk_dma.so.5.0 00:03:57.959 CC lib/util/fd_group.o 00:03:57.959 CC lib/util/file.o 00:03:57.959 SYMLINK libspdk_dma.so 00:03:57.959 CC lib/util/hexlify.o 00:03:57.959 CC lib/util/iov.o 00:03:57.959 LIB libspdk_ioat.a 00:03:57.959 SO libspdk_ioat.so.7.0 00:03:57.959 CC lib/util/math.o 00:03:57.959 CC lib/util/net.o 00:03:57.959 SYMLINK libspdk_ioat.so 00:03:57.959 CC 
lib/util/pipe.o 00:03:57.959 LIB libspdk_vfio_user.a 00:03:57.959 CC lib/util/strerror_tls.o 00:03:57.959 SO libspdk_vfio_user.so.5.0 00:03:57.959 CC lib/util/string.o 00:03:57.959 CC lib/util/uuid.o 00:03:57.959 SYMLINK libspdk_vfio_user.so 00:03:57.959 CC lib/util/xor.o 00:03:57.959 CC lib/util/zipf.o 00:03:57.959 CC lib/util/md5.o 00:03:57.959 LIB libspdk_util.a 00:03:57.959 SO libspdk_util.so.10.0 00:03:57.959 LIB libspdk_trace_parser.a 00:03:57.959 SO libspdk_trace_parser.so.6.0 00:03:57.959 SYMLINK libspdk_util.so 00:03:57.959 SYMLINK libspdk_trace_parser.so 00:03:57.959 CC lib/rdma_utils/rdma_utils.o 00:03:57.959 CC lib/env_dpdk/env.o 00:03:57.959 CC lib/env_dpdk/memory.o 00:03:57.959 CC lib/env_dpdk/pci.o 00:03:57.959 CC lib/json/json_parse.o 00:03:57.959 CC lib/env_dpdk/init.o 00:03:57.959 CC lib/rdma_provider/common.o 00:03:57.959 CC lib/vmd/vmd.o 00:03:57.959 CC lib/conf/conf.o 00:03:57.959 CC lib/idxd/idxd.o 00:03:57.959 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:57.959 LIB libspdk_conf.a 00:03:57.959 CC lib/json/json_util.o 00:03:57.959 SO libspdk_conf.so.6.0 00:03:57.959 LIB libspdk_rdma_utils.a 00:03:57.959 SO libspdk_rdma_utils.so.1.0 00:03:57.959 SYMLINK libspdk_conf.so 00:03:57.959 CC lib/idxd/idxd_user.o 00:03:57.959 CC lib/env_dpdk/threads.o 00:03:57.959 SYMLINK libspdk_rdma_utils.so 00:03:57.959 CC lib/vmd/led.o 00:03:57.959 LIB libspdk_rdma_provider.a 00:03:57.959 CC lib/env_dpdk/pci_ioat.o 00:03:57.959 CC lib/env_dpdk/pci_virtio.o 00:03:57.959 SO libspdk_rdma_provider.so.6.0 00:03:57.959 SYMLINK libspdk_rdma_provider.so 00:03:57.959 CC lib/env_dpdk/pci_vmd.o 00:03:57.959 CC lib/json/json_write.o 00:03:57.959 CC lib/env_dpdk/pci_idxd.o 00:03:57.959 CC lib/idxd/idxd_kernel.o 00:03:57.959 CC lib/env_dpdk/pci_event.o 00:03:57.959 CC lib/env_dpdk/sigbus_handler.o 00:03:57.959 CC lib/env_dpdk/pci_dpdk.o 00:03:57.959 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.959 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.959 LIB libspdk_idxd.a 00:03:57.959 LIB 
libspdk_vmd.a 00:03:57.959 SO libspdk_idxd.so.12.1 00:03:57.959 SO libspdk_vmd.so.6.0 00:03:57.959 LIB libspdk_json.a 00:03:57.959 SYMLINK libspdk_idxd.so 00:03:57.959 SO libspdk_json.so.6.0 00:03:57.959 SYMLINK libspdk_vmd.so 00:03:57.959 SYMLINK libspdk_json.so 00:03:57.959 CC lib/jsonrpc/jsonrpc_server.o 00:03:57.959 CC lib/jsonrpc/jsonrpc_client.o 00:03:57.959 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:57.959 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:57.959 LIB libspdk_jsonrpc.a 00:03:57.959 LIB libspdk_env_dpdk.a 00:03:57.959 SO libspdk_jsonrpc.so.6.0 00:03:57.959 SO libspdk_env_dpdk.so.15.0 00:03:57.959 SYMLINK libspdk_jsonrpc.so 00:03:57.959 SYMLINK libspdk_env_dpdk.so 00:03:57.959 CC lib/rpc/rpc.o 00:03:57.959 LIB libspdk_rpc.a 00:03:57.959 SO libspdk_rpc.so.6.0 00:03:57.959 SYMLINK libspdk_rpc.so 00:03:57.959 CC lib/keyring/keyring.o 00:03:57.959 CC lib/keyring/keyring_rpc.o 00:03:57.959 CC lib/trace/trace.o 00:03:57.959 CC lib/trace/trace_flags.o 00:03:57.959 CC lib/trace/trace_rpc.o 00:03:57.959 CC lib/notify/notify.o 00:03:57.959 CC lib/notify/notify_rpc.o 00:03:57.959 LIB libspdk_notify.a 00:03:57.959 SO libspdk_notify.so.6.0 00:03:57.959 LIB libspdk_keyring.a 00:03:57.959 SYMLINK libspdk_notify.so 00:03:57.959 LIB libspdk_trace.a 00:03:57.959 SO libspdk_keyring.so.2.0 00:03:57.959 SO libspdk_trace.so.11.0 00:03:57.959 SYMLINK libspdk_keyring.so 00:03:57.959 SYMLINK libspdk_trace.so 00:03:58.219 CC lib/thread/thread.o 00:03:58.219 CC lib/thread/iobuf.o 00:03:58.219 CC lib/sock/sock.o 00:03:58.219 CC lib/sock/sock_rpc.o 00:03:58.479 LIB libspdk_sock.a 00:03:58.479 SO libspdk_sock.so.10.0 00:03:58.738 SYMLINK libspdk_sock.so 00:03:58.998 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:58.998 CC lib/nvme/nvme_ctrlr.o 00:03:58.998 CC lib/nvme/nvme_fabric.o 00:03:58.998 CC lib/nvme/nvme_ns_cmd.o 00:03:58.998 CC lib/nvme/nvme_ns.o 00:03:58.998 CC lib/nvme/nvme_pcie_common.o 00:03:58.998 CC lib/nvme/nvme_pcie.o 00:03:58.998 CC lib/nvme/nvme.o 00:03:58.998 CC 
lib/nvme/nvme_qpair.o 00:03:59.566 LIB libspdk_thread.a 00:03:59.566 SO libspdk_thread.so.10.1 00:03:59.566 CC lib/nvme/nvme_quirks.o 00:03:59.566 CC lib/nvme/nvme_transport.o 00:03:59.824 SYMLINK libspdk_thread.so 00:03:59.824 CC lib/nvme/nvme_discovery.o 00:03:59.824 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:59.824 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:59.824 CC lib/nvme/nvme_tcp.o 00:03:59.824 CC lib/nvme/nvme_opal.o 00:03:59.824 CC lib/accel/accel.o 00:04:00.081 CC lib/accel/accel_rpc.o 00:04:00.081 CC lib/accel/accel_sw.o 00:04:00.339 CC lib/nvme/nvme_io_msg.o 00:04:00.339 CC lib/nvme/nvme_poll_group.o 00:04:00.339 CC lib/nvme/nvme_zns.o 00:04:00.339 CC lib/nvme/nvme_stubs.o 00:04:00.339 CC lib/nvme/nvme_auth.o 00:04:00.339 CC lib/nvme/nvme_cuse.o 00:04:00.598 CC lib/blob/blobstore.o 00:04:00.855 CC lib/blob/request.o 00:04:00.855 CC lib/blob/zeroes.o 00:04:00.855 CC lib/nvme/nvme_rdma.o 00:04:00.855 CC lib/blob/blob_bs_dev.o 00:04:01.113 LIB libspdk_accel.a 00:04:01.113 SO libspdk_accel.so.16.0 00:04:01.113 CC lib/init/json_config.o 00:04:01.113 CC lib/virtio/virtio.o 00:04:01.113 SYMLINK libspdk_accel.so 00:04:01.113 CC lib/virtio/virtio_vhost_user.o 00:04:01.371 CC lib/fsdev/fsdev.o 00:04:01.371 CC lib/fsdev/fsdev_io.o 00:04:01.371 CC lib/fsdev/fsdev_rpc.o 00:04:01.371 CC lib/init/subsystem.o 00:04:01.371 CC lib/virtio/virtio_vfio_user.o 00:04:01.371 CC lib/init/subsystem_rpc.o 00:04:01.371 CC lib/bdev/bdev.o 00:04:01.630 CC lib/init/rpc.o 00:04:01.630 CC lib/virtio/virtio_pci.o 00:04:01.630 CC lib/bdev/bdev_rpc.o 00:04:01.630 CC lib/bdev/bdev_zone.o 00:04:01.630 CC lib/bdev/part.o 00:04:01.630 CC lib/bdev/scsi_nvme.o 00:04:01.630 LIB libspdk_init.a 00:04:01.630 SO libspdk_init.so.6.0 00:04:01.889 SYMLINK libspdk_init.so 00:04:01.889 LIB libspdk_virtio.a 00:04:01.889 SO libspdk_virtio.so.7.0 00:04:01.889 LIB libspdk_fsdev.a 00:04:01.889 SYMLINK libspdk_virtio.so 00:04:01.889 SO libspdk_fsdev.so.1.0 00:04:01.889 CC lib/event/app.o 00:04:01.889 CC 
lib/event/app_rpc.o 00:04:01.889 CC lib/event/reactor.o 00:04:01.889 CC lib/event/log_rpc.o 00:04:01.889 CC lib/event/scheduler_static.o 00:04:02.148 SYMLINK libspdk_fsdev.so 00:04:02.148 LIB libspdk_nvme.a 00:04:02.407 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:02.407 SO libspdk_nvme.so.14.0 00:04:02.407 LIB libspdk_event.a 00:04:02.666 SO libspdk_event.so.14.0 00:04:02.666 SYMLINK libspdk_nvme.so 00:04:02.666 SYMLINK libspdk_event.so 00:04:02.926 LIB libspdk_fuse_dispatcher.a 00:04:02.926 SO libspdk_fuse_dispatcher.so.1.0 00:04:02.926 SYMLINK libspdk_fuse_dispatcher.so 00:04:03.865 LIB libspdk_blob.a 00:04:04.125 SO libspdk_blob.so.11.0 00:04:04.125 SYMLINK libspdk_blob.so 00:04:04.125 LIB libspdk_bdev.a 00:04:04.385 SO libspdk_bdev.so.16.0 00:04:04.385 SYMLINK libspdk_bdev.so 00:04:04.385 CC lib/blobfs/blobfs.o 00:04:04.385 CC lib/blobfs/tree.o 00:04:04.385 CC lib/lvol/lvol.o 00:04:04.644 CC lib/ftl/ftl_core.o 00:04:04.644 CC lib/ftl/ftl_init.o 00:04:04.644 CC lib/ftl/ftl_layout.o 00:04:04.644 CC lib/nvmf/ctrlr.o 00:04:04.644 CC lib/scsi/dev.o 00:04:04.644 CC lib/ublk/ublk.o 00:04:04.644 CC lib/nbd/nbd.o 00:04:04.644 CC lib/nvmf/ctrlr_discovery.o 00:04:04.644 CC lib/ftl/ftl_debug.o 00:04:04.644 CC lib/scsi/lun.o 00:04:04.903 CC lib/ftl/ftl_io.o 00:04:04.903 CC lib/ftl/ftl_sb.o 00:04:04.903 CC lib/ftl/ftl_l2p.o 00:04:04.903 CC lib/nbd/nbd_rpc.o 00:04:05.163 CC lib/scsi/port.o 00:04:05.163 CC lib/nvmf/ctrlr_bdev.o 00:04:05.163 CC lib/ftl/ftl_l2p_flat.o 00:04:05.163 CC lib/ftl/ftl_nv_cache.o 00:04:05.163 LIB libspdk_nbd.a 00:04:05.163 CC lib/ftl/ftl_band.o 00:04:05.163 SO libspdk_nbd.so.7.0 00:04:05.163 CC lib/ublk/ublk_rpc.o 00:04:05.163 CC lib/scsi/scsi.o 00:04:05.163 SYMLINK libspdk_nbd.so 00:04:05.163 CC lib/scsi/scsi_bdev.o 00:04:05.163 LIB libspdk_blobfs.a 00:04:05.163 CC lib/ftl/ftl_band_ops.o 00:04:05.423 SO libspdk_blobfs.so.10.0 00:04:05.423 CC lib/scsi/scsi_pr.o 00:04:05.423 LIB libspdk_ublk.a 00:04:05.423 SO libspdk_ublk.so.3.0 00:04:05.423 SYMLINK 
libspdk_blobfs.so 00:04:05.423 CC lib/ftl/ftl_writer.o 00:04:05.423 LIB libspdk_lvol.a 00:04:05.423 SYMLINK libspdk_ublk.so 00:04:05.423 CC lib/scsi/scsi_rpc.o 00:04:05.423 SO libspdk_lvol.so.10.0 00:04:05.423 SYMLINK libspdk_lvol.so 00:04:05.423 CC lib/scsi/task.o 00:04:05.423 CC lib/nvmf/subsystem.o 00:04:05.683 CC lib/nvmf/nvmf.o 00:04:05.683 CC lib/ftl/ftl_rq.o 00:04:05.683 CC lib/ftl/ftl_reloc.o 00:04:05.683 CC lib/ftl/ftl_l2p_cache.o 00:04:05.683 CC lib/ftl/ftl_p2l.o 00:04:05.683 LIB libspdk_scsi.a 00:04:05.683 SO libspdk_scsi.so.9.0 00:04:05.683 CC lib/ftl/ftl_p2l_log.o 00:04:05.683 CC lib/nvmf/nvmf_rpc.o 00:04:05.942 SYMLINK libspdk_scsi.so 00:04:05.942 CC lib/nvmf/transport.o 00:04:06.201 CC lib/ftl/mngt/ftl_mngt.o 00:04:06.201 CC lib/iscsi/conn.o 00:04:06.201 CC lib/iscsi/init_grp.o 00:04:06.201 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:06.461 CC lib/vhost/vhost.o 00:04:06.461 CC lib/iscsi/iscsi.o 00:04:06.461 CC lib/vhost/vhost_rpc.o 00:04:06.461 CC lib/nvmf/tcp.o 00:04:06.461 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:06.461 CC lib/nvmf/stubs.o 00:04:06.720 CC lib/iscsi/param.o 00:04:06.720 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:06.720 CC lib/nvmf/mdns_server.o 00:04:06.720 CC lib/nvmf/rdma.o 00:04:06.720 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:06.979 CC lib/nvmf/auth.o 00:04:06.979 CC lib/iscsi/portal_grp.o 00:04:06.979 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:06.979 CC lib/iscsi/tgt_node.o 00:04:06.979 CC lib/vhost/vhost_scsi.o 00:04:07.238 CC lib/iscsi/iscsi_subsystem.o 00:04:07.238 CC lib/iscsi/iscsi_rpc.o 00:04:07.238 CC lib/iscsi/task.o 00:04:07.238 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:07.498 CC lib/vhost/vhost_blk.o 00:04:07.498 CC lib/vhost/rte_vhost_user.o 00:04:07.498 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:07.498 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:07.498 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:07.498 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:07.757 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:07.757 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:07.757 CC 
lib/ftl/utils/ftl_conf.o 00:04:07.757 CC lib/ftl/utils/ftl_md.o 00:04:08.016 LIB libspdk_iscsi.a 00:04:08.016 CC lib/ftl/utils/ftl_mempool.o 00:04:08.016 CC lib/ftl/utils/ftl_bitmap.o 00:04:08.016 CC lib/ftl/utils/ftl_property.o 00:04:08.016 SO libspdk_iscsi.so.8.0 00:04:08.016 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:08.016 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:08.016 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:08.276 SYMLINK libspdk_iscsi.so 00:04:08.276 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:08.276 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:08.276 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:08.276 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:08.276 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:08.276 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:08.276 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:08.276 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:08.276 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:08.536 LIB libspdk_vhost.a 00:04:08.536 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:08.536 CC lib/ftl/base/ftl_base_dev.o 00:04:08.536 CC lib/ftl/base/ftl_base_bdev.o 00:04:08.536 SO libspdk_vhost.so.8.0 00:04:08.536 CC lib/ftl/ftl_trace.o 00:04:08.536 SYMLINK libspdk_vhost.so 00:04:08.796 LIB libspdk_ftl.a 00:04:09.056 SO libspdk_ftl.so.9.0 00:04:09.056 LIB libspdk_nvmf.a 00:04:09.316 SYMLINK libspdk_ftl.so 00:04:09.316 SO libspdk_nvmf.so.19.0 00:04:09.576 SYMLINK libspdk_nvmf.so 00:04:09.834 CC module/env_dpdk/env_dpdk_rpc.o 00:04:09.834 CC module/accel/error/accel_error.o 00:04:09.834 CC module/keyring/linux/keyring.o 00:04:09.834 CC module/keyring/file/keyring.o 00:04:09.834 CC module/sock/posix/posix.o 00:04:09.834 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:09.834 CC module/accel/dsa/accel_dsa.o 00:04:09.834 CC module/accel/ioat/accel_ioat.o 00:04:09.834 CC module/fsdev/aio/fsdev_aio.o 00:04:09.834 CC module/blob/bdev/blob_bdev.o 00:04:10.092 LIB libspdk_env_dpdk_rpc.a 00:04:10.092 SO libspdk_env_dpdk_rpc.so.6.0 00:04:10.092 CC module/keyring/linux/keyring_rpc.o 
00:04:10.092 CC module/keyring/file/keyring_rpc.o 00:04:10.092 SYMLINK libspdk_env_dpdk_rpc.so 00:04:10.092 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:10.092 CC module/accel/error/accel_error_rpc.o 00:04:10.092 CC module/accel/ioat/accel_ioat_rpc.o 00:04:10.092 LIB libspdk_scheduler_dynamic.a 00:04:10.092 SO libspdk_scheduler_dynamic.so.4.0 00:04:10.092 LIB libspdk_keyring_linux.a 00:04:10.092 LIB libspdk_keyring_file.a 00:04:10.354 SO libspdk_keyring_linux.so.1.0 00:04:10.354 SYMLINK libspdk_scheduler_dynamic.so 00:04:10.354 SO libspdk_keyring_file.so.2.0 00:04:10.354 LIB libspdk_blob_bdev.a 00:04:10.354 CC module/accel/dsa/accel_dsa_rpc.o 00:04:10.354 SO libspdk_blob_bdev.so.11.0 00:04:10.354 LIB libspdk_accel_error.a 00:04:10.354 LIB libspdk_accel_ioat.a 00:04:10.354 SYMLINK libspdk_keyring_linux.so 00:04:10.354 SO libspdk_accel_error.so.2.0 00:04:10.354 SO libspdk_accel_ioat.so.6.0 00:04:10.354 SYMLINK libspdk_keyring_file.so 00:04:10.354 SYMLINK libspdk_blob_bdev.so 00:04:10.354 SYMLINK libspdk_accel_error.so 00:04:10.354 CC module/fsdev/aio/linux_aio_mgr.o 00:04:10.354 SYMLINK libspdk_accel_ioat.so 00:04:10.354 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:10.354 CC module/accel/iaa/accel_iaa.o 00:04:10.354 CC module/accel/iaa/accel_iaa_rpc.o 00:04:10.354 LIB libspdk_accel_dsa.a 00:04:10.354 SO libspdk_accel_dsa.so.5.0 00:04:10.354 CC module/scheduler/gscheduler/gscheduler.o 00:04:10.354 SYMLINK libspdk_accel_dsa.so 00:04:10.612 LIB libspdk_scheduler_dpdk_governor.a 00:04:10.612 CC module/bdev/delay/vbdev_delay.o 00:04:10.613 CC module/blobfs/bdev/blobfs_bdev.o 00:04:10.613 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:10.613 LIB libspdk_accel_iaa.a 00:04:10.613 LIB libspdk_scheduler_gscheduler.a 00:04:10.613 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:10.613 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:10.613 SO libspdk_scheduler_gscheduler.so.4.0 00:04:10.613 SO libspdk_accel_iaa.so.3.0 00:04:10.613 CC module/bdev/error/vbdev_error.o 
00:04:10.613 CC module/bdev/gpt/gpt.o 00:04:10.613 LIB libspdk_fsdev_aio.a 00:04:10.613 SYMLINK libspdk_accel_iaa.so 00:04:10.613 SO libspdk_fsdev_aio.so.1.0 00:04:10.613 CC module/bdev/gpt/vbdev_gpt.o 00:04:10.613 SYMLINK libspdk_scheduler_gscheduler.so 00:04:10.613 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:10.870 CC module/bdev/lvol/vbdev_lvol.o 00:04:10.870 SYMLINK libspdk_fsdev_aio.so 00:04:10.870 LIB libspdk_sock_posix.a 00:04:10.870 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:10.870 LIB libspdk_blobfs_bdev.a 00:04:10.870 SO libspdk_sock_posix.so.6.0 00:04:10.870 SO libspdk_blobfs_bdev.so.6.0 00:04:10.870 SYMLINK libspdk_blobfs_bdev.so 00:04:10.870 CC module/bdev/error/vbdev_error_rpc.o 00:04:10.870 CC module/bdev/malloc/bdev_malloc.o 00:04:10.870 SYMLINK libspdk_sock_posix.so 00:04:10.870 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:10.870 LIB libspdk_bdev_delay.a 00:04:10.870 SO libspdk_bdev_delay.so.6.0 00:04:11.129 LIB libspdk_bdev_gpt.a 00:04:11.129 CC module/bdev/null/bdev_null.o 00:04:11.129 SO libspdk_bdev_gpt.so.6.0 00:04:11.129 LIB libspdk_bdev_error.a 00:04:11.129 CC module/bdev/nvme/bdev_nvme.o 00:04:11.129 SYMLINK libspdk_bdev_delay.so 00:04:11.129 CC module/bdev/passthru/vbdev_passthru.o 00:04:11.129 SO libspdk_bdev_error.so.6.0 00:04:11.129 SYMLINK libspdk_bdev_gpt.so 00:04:11.129 CC module/bdev/null/bdev_null_rpc.o 00:04:11.129 SYMLINK libspdk_bdev_error.so 00:04:11.129 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:11.129 CC module/bdev/nvme/nvme_rpc.o 00:04:11.129 CC module/bdev/raid/bdev_raid.o 00:04:11.129 CC module/bdev/split/vbdev_split.o 00:04:11.393 LIB libspdk_bdev_lvol.a 00:04:11.393 CC module/bdev/nvme/bdev_mdns_client.o 00:04:11.393 LIB libspdk_bdev_malloc.a 00:04:11.393 LIB libspdk_bdev_null.a 00:04:11.393 SO libspdk_bdev_malloc.so.6.0 00:04:11.393 SO libspdk_bdev_null.so.6.0 00:04:11.393 SO libspdk_bdev_lvol.so.6.0 00:04:11.393 SYMLINK libspdk_bdev_null.so 00:04:11.393 SYMLINK libspdk_bdev_malloc.so 00:04:11.393 SYMLINK 
libspdk_bdev_lvol.so 00:04:11.393 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:11.393 CC module/bdev/raid/bdev_raid_rpc.o 00:04:11.393 CC module/bdev/raid/bdev_raid_sb.o 00:04:11.393 CC module/bdev/raid/raid0.o 00:04:11.393 CC module/bdev/nvme/vbdev_opal.o 00:04:11.393 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:11.393 CC module/bdev/split/vbdev_split_rpc.o 00:04:11.653 LIB libspdk_bdev_passthru.a 00:04:11.653 SO libspdk_bdev_passthru.so.6.0 00:04:11.653 LIB libspdk_bdev_split.a 00:04:11.653 SYMLINK libspdk_bdev_passthru.so 00:04:11.653 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:11.653 SO libspdk_bdev_split.so.6.0 00:04:11.653 CC module/bdev/raid/raid1.o 00:04:11.653 CC module/bdev/raid/concat.o 00:04:11.653 CC module/bdev/raid/raid5f.o 00:04:11.653 SYMLINK libspdk_bdev_split.so 00:04:11.653 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:11.913 CC module/bdev/aio/bdev_aio.o 00:04:11.913 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:11.913 CC module/bdev/ftl/bdev_ftl.o 00:04:11.913 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:11.913 CC module/bdev/aio/bdev_aio_rpc.o 00:04:11.913 CC module/bdev/iscsi/bdev_iscsi.o 00:04:11.913 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:12.174 LIB libspdk_bdev_zone_block.a 00:04:12.174 SO libspdk_bdev_zone_block.so.6.0 00:04:12.174 LIB libspdk_bdev_ftl.a 00:04:12.174 LIB libspdk_bdev_aio.a 00:04:12.174 SYMLINK libspdk_bdev_zone_block.so 00:04:12.174 SO libspdk_bdev_ftl.so.6.0 00:04:12.174 SO libspdk_bdev_aio.so.6.0 00:04:12.174 SYMLINK libspdk_bdev_ftl.so 00:04:12.174 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:12.174 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:12.174 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:12.174 SYMLINK libspdk_bdev_aio.so 00:04:12.174 LIB libspdk_bdev_iscsi.a 00:04:12.174 LIB libspdk_bdev_raid.a 00:04:12.433 SO libspdk_bdev_iscsi.so.6.0 00:04:12.433 SO libspdk_bdev_raid.so.6.0 00:04:12.433 SYMLINK libspdk_bdev_iscsi.so 00:04:12.433 SYMLINK libspdk_bdev_raid.so 00:04:12.693 LIB 
libspdk_bdev_virtio.a 00:04:12.693 SO libspdk_bdev_virtio.so.6.0 00:04:12.953 SYMLINK libspdk_bdev_virtio.so 00:04:13.523 LIB libspdk_bdev_nvme.a 00:04:13.523 SO libspdk_bdev_nvme.so.7.0 00:04:13.523 SYMLINK libspdk_bdev_nvme.so 00:04:14.093 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:14.093 CC module/event/subsystems/scheduler/scheduler.o 00:04:14.093 CC module/event/subsystems/iobuf/iobuf.o 00:04:14.093 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:14.093 CC module/event/subsystems/fsdev/fsdev.o 00:04:14.093 CC module/event/subsystems/keyring/keyring.o 00:04:14.093 CC module/event/subsystems/vmd/vmd.o 00:04:14.093 CC module/event/subsystems/sock/sock.o 00:04:14.093 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:14.353 LIB libspdk_event_scheduler.a 00:04:14.353 LIB libspdk_event_keyring.a 00:04:14.353 LIB libspdk_event_iobuf.a 00:04:14.353 LIB libspdk_event_fsdev.a 00:04:14.353 LIB libspdk_event_vhost_blk.a 00:04:14.353 SO libspdk_event_scheduler.so.4.0 00:04:14.353 LIB libspdk_event_vmd.a 00:04:14.353 SO libspdk_event_keyring.so.1.0 00:04:14.353 SO libspdk_event_vhost_blk.so.3.0 00:04:14.353 SO libspdk_event_fsdev.so.1.0 00:04:14.353 LIB libspdk_event_sock.a 00:04:14.353 SO libspdk_event_iobuf.so.3.0 00:04:14.353 SO libspdk_event_vmd.so.6.0 00:04:14.353 SO libspdk_event_sock.so.5.0 00:04:14.353 SYMLINK libspdk_event_scheduler.so 00:04:14.353 SYMLINK libspdk_event_keyring.so 00:04:14.353 SYMLINK libspdk_event_vhost_blk.so 00:04:14.353 SYMLINK libspdk_event_fsdev.so 00:04:14.353 SYMLINK libspdk_event_iobuf.so 00:04:14.353 SYMLINK libspdk_event_sock.so 00:04:14.353 SYMLINK libspdk_event_vmd.so 00:04:14.924 CC module/event/subsystems/accel/accel.o 00:04:14.924 LIB libspdk_event_accel.a 00:04:14.924 SO libspdk_event_accel.so.6.0 00:04:14.924 SYMLINK libspdk_event_accel.so 00:04:15.494 CC module/event/subsystems/bdev/bdev.o 00:04:15.494 LIB libspdk_event_bdev.a 00:04:15.755 SO libspdk_event_bdev.so.6.0 00:04:15.755 SYMLINK libspdk_event_bdev.so 
00:04:16.015 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:16.015 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:16.015 CC module/event/subsystems/scsi/scsi.o 00:04:16.015 CC module/event/subsystems/nbd/nbd.o 00:04:16.015 CC module/event/subsystems/ublk/ublk.o 00:04:16.275 LIB libspdk_event_scsi.a 00:04:16.275 LIB libspdk_event_ublk.a 00:04:16.275 LIB libspdk_event_nbd.a 00:04:16.275 SO libspdk_event_ublk.so.3.0 00:04:16.275 SO libspdk_event_nbd.so.6.0 00:04:16.275 SO libspdk_event_scsi.so.6.0 00:04:16.275 SYMLINK libspdk_event_ublk.so 00:04:16.275 LIB libspdk_event_nvmf.a 00:04:16.275 SYMLINK libspdk_event_nbd.so 00:04:16.275 SYMLINK libspdk_event_scsi.so 00:04:16.275 SO libspdk_event_nvmf.so.6.0 00:04:16.275 SYMLINK libspdk_event_nvmf.so 00:04:16.535 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:16.535 CC module/event/subsystems/iscsi/iscsi.o 00:04:16.794 LIB libspdk_event_vhost_scsi.a 00:04:16.794 SO libspdk_event_vhost_scsi.so.3.0 00:04:16.794 LIB libspdk_event_iscsi.a 00:04:16.794 SYMLINK libspdk_event_vhost_scsi.so 00:04:16.794 SO libspdk_event_iscsi.so.6.0 00:04:16.794 SYMLINK libspdk_event_iscsi.so 00:04:17.054 SO libspdk.so.6.0 00:04:17.054 SYMLINK libspdk.so 00:04:17.313 CXX app/trace/trace.o 00:04:17.313 CC app/spdk_lspci/spdk_lspci.o 00:04:17.313 CC app/trace_record/trace_record.o 00:04:17.313 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:17.571 CC app/iscsi_tgt/iscsi_tgt.o 00:04:17.571 CC app/nvmf_tgt/nvmf_main.o 00:04:17.571 CC app/spdk_tgt/spdk_tgt.o 00:04:17.571 CC examples/util/zipf/zipf.o 00:04:17.571 CC test/thread/poller_perf/poller_perf.o 00:04:17.571 CC examples/ioat/perf/perf.o 00:04:17.571 LINK spdk_lspci 00:04:17.571 LINK interrupt_tgt 00:04:17.571 LINK nvmf_tgt 00:04:17.571 LINK iscsi_tgt 00:04:17.571 LINK zipf 00:04:17.571 LINK poller_perf 00:04:17.571 LINK spdk_tgt 00:04:17.571 LINK spdk_trace_record 00:04:17.830 LINK ioat_perf 00:04:17.830 LINK spdk_trace 00:04:17.830 CC app/spdk_nvme_perf/perf.o 00:04:17.830 CC 
examples/ioat/verify/verify.o 00:04:17.830 CC app/spdk_nvme_identify/identify.o 00:04:17.830 CC app/spdk_nvme_discover/discovery_aer.o 00:04:17.830 CC app/spdk_top/spdk_top.o 00:04:17.830 CC app/spdk_dd/spdk_dd.o 00:04:18.088 CC test/dma/test_dma/test_dma.o 00:04:18.088 CC app/fio/nvme/fio_plugin.o 00:04:18.088 CC test/app/bdev_svc/bdev_svc.o 00:04:18.088 CC app/fio/bdev/fio_plugin.o 00:04:18.088 LINK verify 00:04:18.088 LINK spdk_nvme_discover 00:04:18.088 LINK bdev_svc 00:04:18.347 LINK spdk_dd 00:04:18.347 CC app/vhost/vhost.o 00:04:18.347 CC examples/thread/thread/thread_ex.o 00:04:18.347 LINK test_dma 00:04:18.605 LINK vhost 00:04:18.605 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:18.605 LINK spdk_bdev 00:04:18.605 LINK spdk_nvme 00:04:18.605 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:18.605 LINK thread 00:04:18.605 LINK spdk_nvme_perf 00:04:18.864 LINK spdk_nvme_identify 00:04:18.864 TEST_HEADER include/spdk/accel.h 00:04:18.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:18.864 TEST_HEADER include/spdk/accel_module.h 00:04:18.864 TEST_HEADER include/spdk/assert.h 00:04:18.864 TEST_HEADER include/spdk/barrier.h 00:04:18.864 TEST_HEADER include/spdk/base64.h 00:04:18.864 TEST_HEADER include/spdk/bdev.h 00:04:18.864 TEST_HEADER include/spdk/bdev_module.h 00:04:18.864 TEST_HEADER include/spdk/bdev_zone.h 00:04:18.864 TEST_HEADER include/spdk/bit_array.h 00:04:18.864 TEST_HEADER include/spdk/bit_pool.h 00:04:18.864 TEST_HEADER include/spdk/blob_bdev.h 00:04:18.864 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:18.864 TEST_HEADER include/spdk/blobfs.h 00:04:18.864 TEST_HEADER include/spdk/blob.h 00:04:18.864 TEST_HEADER include/spdk/conf.h 00:04:18.864 TEST_HEADER include/spdk/config.h 00:04:18.864 TEST_HEADER include/spdk/cpuset.h 00:04:18.864 TEST_HEADER include/spdk/crc16.h 00:04:18.864 TEST_HEADER include/spdk/crc32.h 00:04:18.864 TEST_HEADER include/spdk/crc64.h 00:04:18.864 TEST_HEADER include/spdk/dif.h 00:04:18.864 TEST_HEADER include/spdk/dma.h 
00:04:18.864 TEST_HEADER include/spdk/endian.h 00:04:18.864 TEST_HEADER include/spdk/env_dpdk.h 00:04:18.864 TEST_HEADER include/spdk/env.h 00:04:18.864 TEST_HEADER include/spdk/event.h 00:04:18.864 TEST_HEADER include/spdk/fd_group.h 00:04:18.864 TEST_HEADER include/spdk/fd.h 00:04:18.864 TEST_HEADER include/spdk/file.h 00:04:18.864 TEST_HEADER include/spdk/fsdev.h 00:04:18.864 TEST_HEADER include/spdk/fsdev_module.h 00:04:18.864 TEST_HEADER include/spdk/ftl.h 00:04:18.864 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:18.864 TEST_HEADER include/spdk/gpt_spec.h 00:04:18.864 LINK spdk_top 00:04:18.864 TEST_HEADER include/spdk/hexlify.h 00:04:18.864 TEST_HEADER include/spdk/histogram_data.h 00:04:18.864 TEST_HEADER include/spdk/idxd.h 00:04:18.864 TEST_HEADER include/spdk/idxd_spec.h 00:04:18.864 TEST_HEADER include/spdk/init.h 00:04:18.864 TEST_HEADER include/spdk/ioat.h 00:04:18.864 TEST_HEADER include/spdk/ioat_spec.h 00:04:18.864 TEST_HEADER include/spdk/iscsi_spec.h 00:04:18.864 TEST_HEADER include/spdk/json.h 00:04:18.864 TEST_HEADER include/spdk/jsonrpc.h 00:04:18.864 TEST_HEADER include/spdk/keyring.h 00:04:18.864 TEST_HEADER include/spdk/keyring_module.h 00:04:18.864 TEST_HEADER include/spdk/likely.h 00:04:18.864 TEST_HEADER include/spdk/log.h 00:04:18.864 TEST_HEADER include/spdk/lvol.h 00:04:18.864 TEST_HEADER include/spdk/md5.h 00:04:18.864 TEST_HEADER include/spdk/memory.h 00:04:18.864 TEST_HEADER include/spdk/mmio.h 00:04:18.864 TEST_HEADER include/spdk/nbd.h 00:04:18.864 TEST_HEADER include/spdk/net.h 00:04:18.864 TEST_HEADER include/spdk/notify.h 00:04:18.864 TEST_HEADER include/spdk/nvme.h 00:04:18.864 TEST_HEADER include/spdk/nvme_intel.h 00:04:18.864 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:18.864 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:18.864 TEST_HEADER include/spdk/nvme_spec.h 00:04:18.864 TEST_HEADER include/spdk/nvme_zns.h 00:04:18.864 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:18.864 TEST_HEADER include/spdk/nvmf_fc_spec.h 
00:04:18.864 TEST_HEADER include/spdk/nvmf.h 00:04:18.864 TEST_HEADER include/spdk/nvmf_spec.h 00:04:18.864 TEST_HEADER include/spdk/nvmf_transport.h 00:04:18.864 TEST_HEADER include/spdk/opal.h 00:04:18.864 TEST_HEADER include/spdk/opal_spec.h 00:04:18.864 TEST_HEADER include/spdk/pci_ids.h 00:04:18.864 TEST_HEADER include/spdk/pipe.h 00:04:18.864 TEST_HEADER include/spdk/queue.h 00:04:18.864 TEST_HEADER include/spdk/reduce.h 00:04:18.864 TEST_HEADER include/spdk/rpc.h 00:04:18.864 TEST_HEADER include/spdk/scheduler.h 00:04:18.864 TEST_HEADER include/spdk/scsi.h 00:04:18.864 TEST_HEADER include/spdk/scsi_spec.h 00:04:18.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:18.864 CC test/event/event_perf/event_perf.o 00:04:18.864 TEST_HEADER include/spdk/sock.h 00:04:18.864 TEST_HEADER include/spdk/stdinc.h 00:04:18.864 TEST_HEADER include/spdk/string.h 00:04:18.864 TEST_HEADER include/spdk/thread.h 00:04:18.864 TEST_HEADER include/spdk/trace.h 00:04:18.864 TEST_HEADER include/spdk/trace_parser.h 00:04:18.864 TEST_HEADER include/spdk/tree.h 00:04:18.864 TEST_HEADER include/spdk/ublk.h 00:04:18.864 TEST_HEADER include/spdk/util.h 00:04:18.864 TEST_HEADER include/spdk/uuid.h 00:04:18.864 TEST_HEADER include/spdk/version.h 00:04:18.864 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:18.864 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:18.864 TEST_HEADER include/spdk/vhost.h 00:04:18.864 CC test/env/mem_callbacks/mem_callbacks.o 00:04:18.864 TEST_HEADER include/spdk/vmd.h 00:04:18.864 TEST_HEADER include/spdk/xor.h 00:04:18.864 TEST_HEADER include/spdk/zipf.h 00:04:18.864 CXX test/cpp_headers/accel.o 00:04:19.139 LINK nvme_fuzz 00:04:19.139 CC examples/sock/hello_world/hello_sock.o 00:04:19.139 CC examples/vmd/lsvmd/lsvmd.o 00:04:19.139 LINK event_perf 00:04:19.139 CC examples/idxd/perf/perf.o 00:04:19.139 CXX test/cpp_headers/accel_module.o 00:04:19.139 LINK mem_callbacks 00:04:19.139 LINK lsvmd 00:04:19.139 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:19.139 CXX 
test/cpp_headers/assert.o 00:04:19.397 LINK hello_sock 00:04:19.397 CC test/event/reactor/reactor.o 00:04:19.397 CC examples/accel/perf/accel_perf.o 00:04:19.397 CC test/env/vtophys/vtophys.o 00:04:19.397 LINK vhost_fuzz 00:04:19.397 CXX test/cpp_headers/barrier.o 00:04:19.397 CC examples/vmd/led/led.o 00:04:19.397 LINK idxd_perf 00:04:19.397 LINK reactor 00:04:19.397 LINK hello_fsdev 00:04:19.397 LINK vtophys 00:04:19.655 CXX test/cpp_headers/base64.o 00:04:19.655 LINK led 00:04:19.655 CC examples/blob/hello_world/hello_blob.o 00:04:19.655 CC test/event/reactor_perf/reactor_perf.o 00:04:19.655 CC test/nvme/aer/aer.o 00:04:19.655 CC test/rpc_client/rpc_client_test.o 00:04:19.655 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:19.655 CXX test/cpp_headers/bdev.o 00:04:19.655 CC test/app/histogram_perf/histogram_perf.o 00:04:19.655 LINK reactor_perf 00:04:19.913 LINK hello_blob 00:04:19.913 LINK env_dpdk_post_init 00:04:19.913 LINK accel_perf 00:04:19.913 CC test/accel/dif/dif.o 00:04:19.913 LINK rpc_client_test 00:04:19.913 CXX test/cpp_headers/bdev_module.o 00:04:19.913 LINK histogram_perf 00:04:19.913 LINK aer 00:04:19.913 CC test/event/app_repeat/app_repeat.o 00:04:19.913 CXX test/cpp_headers/bdev_zone.o 00:04:20.169 CC test/env/memory/memory_ut.o 00:04:20.169 CC examples/blob/cli/blobcli.o 00:04:20.169 LINK app_repeat 00:04:20.169 CC test/app/jsoncat/jsoncat.o 00:04:20.169 CC examples/nvme/hello_world/hello_world.o 00:04:20.169 CC test/nvme/reset/reset.o 00:04:20.169 CC test/blobfs/mkfs/mkfs.o 00:04:20.169 CXX test/cpp_headers/bit_array.o 00:04:20.427 LINK jsoncat 00:04:20.427 LINK mkfs 00:04:20.427 CC test/event/scheduler/scheduler.o 00:04:20.427 LINK hello_world 00:04:20.427 CXX test/cpp_headers/bit_pool.o 00:04:20.427 LINK iscsi_fuzz 00:04:20.427 LINK reset 00:04:20.427 CC test/app/stub/stub.o 00:04:20.684 LINK dif 00:04:20.684 CXX test/cpp_headers/blob_bdev.o 00:04:20.684 CXX test/cpp_headers/blobfs_bdev.o 00:04:20.684 LINK scheduler 00:04:20.684 
CXX test/cpp_headers/blobfs.o 00:04:20.684 LINK blobcli 00:04:20.684 CC examples/nvme/reconnect/reconnect.o 00:04:20.684 LINK stub 00:04:20.684 CC test/nvme/sgl/sgl.o 00:04:20.684 CXX test/cpp_headers/blob.o 00:04:20.942 LINK memory_ut 00:04:20.942 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:20.942 CC examples/nvme/arbitration/arbitration.o 00:04:20.942 CC examples/bdev/hello_world/hello_bdev.o 00:04:20.942 CC test/nvme/e2edp/nvme_dp.o 00:04:20.942 CXX test/cpp_headers/conf.o 00:04:20.942 CC test/lvol/esnap/esnap.o 00:04:20.942 CC test/bdev/bdevio/bdevio.o 00:04:20.942 LINK sgl 00:04:20.942 LINK reconnect 00:04:21.253 CXX test/cpp_headers/config.o 00:04:21.253 CXX test/cpp_headers/cpuset.o 00:04:21.253 CC test/env/pci/pci_ut.o 00:04:21.253 LINK hello_bdev 00:04:21.253 CXX test/cpp_headers/crc16.o 00:04:21.253 LINK nvme_dp 00:04:21.253 LINK arbitration 00:04:21.253 CC test/nvme/overhead/overhead.o 00:04:21.253 CXX test/cpp_headers/crc32.o 00:04:21.253 CXX test/cpp_headers/crc64.o 00:04:21.253 LINK bdevio 00:04:21.511 LINK nvme_manage 00:04:21.511 CC test/nvme/err_injection/err_injection.o 00:04:21.511 CC examples/bdev/bdevperf/bdevperf.o 00:04:21.511 CXX test/cpp_headers/dif.o 00:04:21.511 CC examples/nvme/hotplug/hotplug.o 00:04:21.511 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:21.511 LINK overhead 00:04:21.511 CXX test/cpp_headers/dma.o 00:04:21.511 LINK pci_ut 00:04:21.511 LINK err_injection 00:04:21.511 CC examples/nvme/abort/abort.o 00:04:21.511 LINK cmb_copy 00:04:21.511 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:21.769 CXX test/cpp_headers/endian.o 00:04:21.769 LINK hotplug 00:04:21.769 CC test/nvme/startup/startup.o 00:04:21.769 CXX test/cpp_headers/env_dpdk.o 00:04:21.769 LINK pmr_persistence 00:04:21.769 CC test/nvme/reserve/reserve.o 00:04:21.769 CC test/nvme/simple_copy/simple_copy.o 00:04:21.769 CC test/nvme/connect_stress/connect_stress.o 00:04:21.769 LINK startup 00:04:22.027 CC test/nvme/boot_partition/boot_partition.o 00:04:22.027 
CXX test/cpp_headers/env.o 00:04:22.027 CXX test/cpp_headers/event.o 00:04:22.027 LINK abort 00:04:22.027 LINK connect_stress 00:04:22.027 LINK reserve 00:04:22.027 LINK boot_partition 00:04:22.027 LINK simple_copy 00:04:22.027 CXX test/cpp_headers/fd_group.o 00:04:22.027 CC test/nvme/compliance/nvme_compliance.o 00:04:22.027 CXX test/cpp_headers/fd.o 00:04:22.285 CC test/nvme/fused_ordering/fused_ordering.o 00:04:22.285 CXX test/cpp_headers/file.o 00:04:22.285 LINK bdevperf 00:04:22.285 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:22.285 CXX test/cpp_headers/fsdev.o 00:04:22.285 CC test/nvme/fdp/fdp.o 00:04:22.285 CXX test/cpp_headers/fsdev_module.o 00:04:22.285 CC test/nvme/cuse/cuse.o 00:04:22.285 LINK fused_ordering 00:04:22.285 CXX test/cpp_headers/ftl.o 00:04:22.285 CXX test/cpp_headers/fuse_dispatcher.o 00:04:22.542 LINK doorbell_aers 00:04:22.542 CXX test/cpp_headers/gpt_spec.o 00:04:22.542 LINK nvme_compliance 00:04:22.542 CXX test/cpp_headers/hexlify.o 00:04:22.542 CXX test/cpp_headers/histogram_data.o 00:04:22.542 CXX test/cpp_headers/idxd.o 00:04:22.542 CXX test/cpp_headers/idxd_spec.o 00:04:22.542 CXX test/cpp_headers/init.o 00:04:22.542 CC examples/nvmf/nvmf/nvmf.o 00:04:22.542 LINK fdp 00:04:22.542 CXX test/cpp_headers/ioat.o 00:04:22.542 CXX test/cpp_headers/ioat_spec.o 00:04:22.800 CXX test/cpp_headers/iscsi_spec.o 00:04:22.800 CXX test/cpp_headers/json.o 00:04:22.800 CXX test/cpp_headers/jsonrpc.o 00:04:22.800 CXX test/cpp_headers/keyring.o 00:04:22.800 CXX test/cpp_headers/keyring_module.o 00:04:22.800 CXX test/cpp_headers/likely.o 00:04:22.800 CXX test/cpp_headers/log.o 00:04:22.800 CXX test/cpp_headers/lvol.o 00:04:22.800 CXX test/cpp_headers/md5.o 00:04:22.800 CXX test/cpp_headers/memory.o 00:04:22.800 CXX test/cpp_headers/mmio.o 00:04:22.800 LINK nvmf 00:04:22.800 CXX test/cpp_headers/nbd.o 00:04:22.800 CXX test/cpp_headers/net.o 00:04:23.057 CXX test/cpp_headers/notify.o 00:04:23.057 CXX test/cpp_headers/nvme.o 00:04:23.057 CXX 
test/cpp_headers/nvme_intel.o 00:04:23.057 CXX test/cpp_headers/nvme_ocssd.o 00:04:23.057 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:23.057 CXX test/cpp_headers/nvme_spec.o 00:04:23.057 CXX test/cpp_headers/nvme_zns.o 00:04:23.057 CXX test/cpp_headers/nvmf_cmd.o 00:04:23.057 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:23.057 CXX test/cpp_headers/nvmf.o 00:04:23.057 CXX test/cpp_headers/nvmf_spec.o 00:04:23.057 CXX test/cpp_headers/nvmf_transport.o 00:04:23.057 CXX test/cpp_headers/opal.o 00:04:23.317 CXX test/cpp_headers/opal_spec.o 00:04:23.317 CXX test/cpp_headers/pci_ids.o 00:04:23.317 CXX test/cpp_headers/pipe.o 00:04:23.317 CXX test/cpp_headers/queue.o 00:04:23.317 CXX test/cpp_headers/reduce.o 00:04:23.317 CXX test/cpp_headers/rpc.o 00:04:23.317 CXX test/cpp_headers/scheduler.o 00:04:23.317 CXX test/cpp_headers/scsi.o 00:04:23.317 CXX test/cpp_headers/scsi_spec.o 00:04:23.317 CXX test/cpp_headers/sock.o 00:04:23.317 CXX test/cpp_headers/stdinc.o 00:04:23.317 CXX test/cpp_headers/string.o 00:04:23.317 CXX test/cpp_headers/thread.o 00:04:23.317 CXX test/cpp_headers/trace.o 00:04:23.576 CXX test/cpp_headers/trace_parser.o 00:04:23.576 CXX test/cpp_headers/tree.o 00:04:23.576 CXX test/cpp_headers/ublk.o 00:04:23.576 CXX test/cpp_headers/util.o 00:04:23.576 CXX test/cpp_headers/uuid.o 00:04:23.576 CXX test/cpp_headers/version.o 00:04:23.576 CXX test/cpp_headers/vfio_user_pci.o 00:04:23.576 CXX test/cpp_headers/vfio_user_spec.o 00:04:23.576 LINK cuse 00:04:23.576 CXX test/cpp_headers/vhost.o 00:04:23.576 CXX test/cpp_headers/vmd.o 00:04:23.576 CXX test/cpp_headers/xor.o 00:04:23.576 CXX test/cpp_headers/zipf.o 00:04:26.119 LINK esnap 00:04:26.690 00:04:26.690 real 1m15.630s 00:04:26.690 user 5m52.851s 00:04:26.690 sys 1m4.005s 00:04:26.690 01:48:31 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:26.690 01:48:31 make -- common/autotest_common.sh@10 -- $ set +x 00:04:26.690 ************************************ 00:04:26.690 END TEST make 00:04:26.690 
************************************ 00:04:26.690 01:48:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:26.690 01:48:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:26.690 01:48:31 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:26.690 01:48:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.690 01:48:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:26.690 01:48:31 -- pm/common@44 -- $ pid=6196 00:04:26.690 01:48:31 -- pm/common@50 -- $ kill -TERM 6196 00:04:26.690 01:48:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.690 01:48:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:26.690 01:48:31 -- pm/common@44 -- $ pid=6198 00:04:26.690 01:48:31 -- pm/common@50 -- $ kill -TERM 6198 00:04:26.690 01:48:32 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:26.690 01:48:32 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:26.690 01:48:32 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:26.690 01:48:32 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:26.690 01:48:32 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.690 01:48:32 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.690 01:48:32 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.690 01:48:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.690 01:48:32 -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.690 01:48:32 -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.690 01:48:32 -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.690 01:48:32 -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.690 01:48:32 -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.690 01:48:32 -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.690 01:48:32 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.690 01:48:32 -- scripts/common.sh@344 -- # case "$op" in 00:04:26.690 
01:48:32 -- scripts/common.sh@345 -- # : 1 00:04:26.690 01:48:32 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.950 01:48:32 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.950 01:48:32 -- scripts/common.sh@365 -- # decimal 1 00:04:26.950 01:48:32 -- scripts/common.sh@353 -- # local d=1 00:04:26.950 01:48:32 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.950 01:48:32 -- scripts/common.sh@355 -- # echo 1 00:04:26.950 01:48:32 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.950 01:48:32 -- scripts/common.sh@366 -- # decimal 2 00:04:26.950 01:48:32 -- scripts/common.sh@353 -- # local d=2 00:04:26.950 01:48:32 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.950 01:48:32 -- scripts/common.sh@355 -- # echo 2 00:04:26.950 01:48:32 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.950 01:48:32 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.950 01:48:32 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.950 01:48:32 -- scripts/common.sh@368 -- # return 0 00:04:26.950 01:48:32 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.950 01:48:32 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.950 --rc genhtml_branch_coverage=1 00:04:26.950 --rc genhtml_function_coverage=1 00:04:26.950 --rc genhtml_legend=1 00:04:26.950 --rc geninfo_all_blocks=1 00:04:26.950 --rc geninfo_unexecuted_blocks=1 00:04:26.950 00:04:26.950 ' 00:04:26.950 01:48:32 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.950 --rc genhtml_branch_coverage=1 00:04:26.950 --rc genhtml_function_coverage=1 00:04:26.950 --rc genhtml_legend=1 00:04:26.950 --rc geninfo_all_blocks=1 00:04:26.950 --rc geninfo_unexecuted_blocks=1 00:04:26.950 00:04:26.950 ' 00:04:26.950 01:48:32 -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:26.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.951 --rc genhtml_branch_coverage=1 00:04:26.951 --rc genhtml_function_coverage=1 00:04:26.951 --rc genhtml_legend=1 00:04:26.951 --rc geninfo_all_blocks=1 00:04:26.951 --rc geninfo_unexecuted_blocks=1 00:04:26.951 00:04:26.951 ' 00:04:26.951 01:48:32 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:26.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.951 --rc genhtml_branch_coverage=1 00:04:26.951 --rc genhtml_function_coverage=1 00:04:26.951 --rc genhtml_legend=1 00:04:26.951 --rc geninfo_all_blocks=1 00:04:26.951 --rc geninfo_unexecuted_blocks=1 00:04:26.951 00:04:26.951 ' 00:04:26.951 01:48:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.951 01:48:32 -- nvmf/common.sh@7 -- # uname -s 00:04:26.951 01:48:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.951 01:48:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.951 01:48:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.951 01:48:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.951 01:48:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:26.951 01:48:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.951 01:48:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.951 01:48:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.951 01:48:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.951 01:48:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.951 01:48:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6385b0b-8611-4302-b9e4-4a678e16f84f 00:04:26.951 01:48:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=f6385b0b-8611-4302-b9e4-4a678e16f84f 00:04:26.951 01:48:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.951 01:48:32 -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:04:26.951 01:48:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.951 01:48:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.951 01:48:32 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.951 01:48:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:26.951 01:48:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.951 01:48:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.951 01:48:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.951 01:48:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.951 01:48:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.951 01:48:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.951 01:48:32 -- paths/export.sh@5 -- # export PATH 00:04:26.951 01:48:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.951 01:48:32 -- nvmf/common.sh@51 -- # : 
0 00:04:26.951 01:48:32 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:26.951 01:48:32 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:26.951 01:48:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.951 01:48:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.951 01:48:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.951 01:48:32 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:26.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:26.951 01:48:32 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:26.951 01:48:32 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:26.951 01:48:32 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:26.951 01:48:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:26.951 01:48:32 -- spdk/autotest.sh@32 -- # uname -s 00:04:26.951 01:48:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:26.951 01:48:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:26.951 01:48:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.951 01:48:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:26.951 01:48:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:26.951 01:48:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:26.951 01:48:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:26.951 01:48:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:26.951 01:48:32 -- spdk/autotest.sh@48 -- # udevadm_pid=66485 00:04:26.951 01:48:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:26.951 01:48:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:26.951 01:48:32 -- pm/common@17 -- # local monitor 00:04:26.951 01:48:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.951 01:48:32 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.951 01:48:32 -- pm/common@21 -- # date +%s 00:04:26.951 01:48:32 -- pm/common@25 -- # sleep 1 00:04:26.951 01:48:32 -- pm/common@21 -- # date +%s 00:04:26.951 01:48:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733536112 00:04:26.951 01:48:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733536112 00:04:26.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733536112_collect-vmstat.pm.log 00:04:26.951 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733536112_collect-cpu-load.pm.log 00:04:27.892 01:48:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:27.892 01:48:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:27.892 01:48:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:27.892 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:04:27.892 01:48:33 -- spdk/autotest.sh@59 -- # create_test_list 00:04:27.892 01:48:33 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:27.892 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:04:27.892 01:48:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:28.153 01:48:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:28.153 01:48:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:28.153 01:48:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:28.153 01:48:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:28.153 01:48:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:28.153 01:48:33 -- common/autotest_common.sh@1455 -- # uname 00:04:28.153 01:48:33 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:28.153 01:48:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:28.153 01:48:33 -- common/autotest_common.sh@1475 -- # uname 00:04:28.153 01:48:33 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:28.153 01:48:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:28.153 01:48:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:28.153 lcov: LCOV version 1.15 00:04:28.153 01:48:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:43.079 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:43.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:57.985 01:49:01 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:57.985 01:49:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:57.985 01:49:01 -- common/autotest_common.sh@10 -- # set +x 00:04:57.985 01:49:01 -- spdk/autotest.sh@78 -- # rm -f 00:04:57.985 01:49:01 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:57.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.985 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:57.985 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:57.985 01:49:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:57.985 01:49:02 -- common/autotest_common.sh@1655 -- # 
zoned_devs=() 00:04:57.985 01:49:02 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:57.985 01:49:02 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:57.985 01:49:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.985 01:49:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:57.985 01:49:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:57.985 01:49:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.985 01:49:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:04:57.985 01:49:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:04:57.985 01:49:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.985 01:49:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:04:57.985 01:49:02 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:04:57.985 01:49:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:57.985 01:49:02 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:57.985 01:49:02 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:57.985 01:49:02 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:57.985 01:49:02 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:57.985 01:49:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:57.985 
01:49:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.985 01:49:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.985 01:49:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:57.985 01:49:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:57.985 01:49:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:57.985 No valid GPT data, bailing 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # pt= 00:04:57.985 01:49:02 -- scripts/common.sh@395 -- # return 1 00:04:57.985 01:49:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:57.985 1+0 records in 00:04:57.985 1+0 records out 00:04:57.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560496 s, 187 MB/s 00:04:57.985 01:49:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.985 01:49:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.985 01:49:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:57.985 01:49:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:57.985 01:49:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:57.985 No valid GPT data, bailing 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # pt= 00:04:57.985 01:49:02 -- scripts/common.sh@395 -- # return 1 00:04:57.985 01:49:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:57.985 1+0 records in 00:04:57.985 1+0 records out 00:04:57.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434308 s, 241 MB/s 00:04:57.985 01:49:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.985 01:49:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.985 01:49:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:57.985 
01:49:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:57.985 01:49:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 00:04:57.985 No valid GPT data, bailing 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # pt= 00:04:57.985 01:49:02 -- scripts/common.sh@395 -- # return 1 00:04:57.985 01:49:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:57.985 1+0 records in 00:04:57.985 1+0 records out 00:04:57.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635083 s, 165 MB/s 00:04:57.985 01:49:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:57.985 01:49:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:57.985 01:49:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:57.985 01:49:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:57.985 01:49:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:57.985 No valid GPT data, bailing 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:57.985 01:49:02 -- scripts/common.sh@394 -- # pt= 00:04:57.985 01:49:02 -- scripts/common.sh@395 -- # return 1 00:04:57.985 01:49:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:57.985 1+0 records in 00:04:57.985 1+0 records out 00:04:57.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00442079 s, 237 MB/s 00:04:57.985 01:49:02 -- spdk/autotest.sh@105 -- # sync 00:04:57.985 01:49:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:57.985 01:49:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:57.985 01:49:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.530 01:49:05 -- spdk/autotest.sh@111 -- # uname -s 00:05:00.530 01:49:05 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:05:00.530 01:49:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:00.530 01:49:05 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.101 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.101 Hugepages 00:05:01.101 node hugesize free / total 00:05:01.101 node0 1048576kB 0 / 0 00:05:01.101 node0 2048kB 0 / 0 00:05:01.101 00:05:01.101 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.361 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.361 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:01.361 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:01.361 01:49:06 -- spdk/autotest.sh@117 -- # uname -s 00:05:01.361 01:49:06 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:01.361 01:49:06 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:01.361 01:49:06 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.299 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.299 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.556 01:49:07 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:03.493 01:49:08 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:03.493 01:49:08 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:03.493 01:49:08 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.493 01:49:08 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:03.493 01:49:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:03.493 01:49:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:03.493 01:49:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.493 01:49:08 -- common/autotest_common.sh@1497 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.493 01:49:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:03.493 01:49:08 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:03.493 01:49:08 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:03.493 01:49:08 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.063 Waiting for block devices as requested 00:05:04.063 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.324 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.324 01:49:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:04.324 01:49:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:04.324 01:49:09 -- 
common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:04.324 01:49:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:04.324 01:49:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:04.324 01:49:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1541 -- # continue 00:05:04.324 01:49:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:04.324 01:49:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:04.324 01:49:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:04.324 01:49:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:04.324 01:49:09 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:04.324 01:49:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:04.324 01:49:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:04.324 01:49:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:04.324 01:49:09 -- common/autotest_common.sh@1541 -- # continue 00:05:04.324 01:49:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:04.324 01:49:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:04.324 01:49:09 -- common/autotest_common.sh@10 -- # set +x 00:05:04.324 01:49:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:04.324 01:49:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:04.324 01:49:09 -- common/autotest_common.sh@10 -- # set +x 00:05:04.584 01:49:09 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.154 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.414 01:49:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:05.414 01:49:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:05.414 01:49:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.414 01:49:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:05.414 01:49:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:05.414 01:49:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.414 01:49:10 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:05.414 01:49:10 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:05.414 01:49:10 -- 
common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:05.414 01:49:10 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:05.414 01:49:10 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:05.414 01:49:10 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:05.414 01:49:10 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:05.414 01:49:10 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.414 01:49:10 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.414 01:49:10 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:05.674 01:49:10 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:05:05.674 01:49:10 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.674 01:49:10 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:05.674 01:49:10 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:05.674 01:49:10 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:05.674 01:49:10 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.674 01:49:10 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:05.674 01:49:10 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.674 01:49:10 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:05.674 01:49:10 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.674 01:49:10 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:05.674 01:49:10 -- common/autotest_common.sh@1570 -- # return 0 00:05:05.674 01:49:10 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:05.674 01:49:10 -- common/autotest_common.sh@1578 -- # return 0 00:05:05.674 01:49:10 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.674 01:49:10 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:05:05.674 01:49:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.674 01:49:10 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.674 01:49:10 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.674 01:49:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.674 01:49:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.674 01:49:10 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:05.674 01:49:10 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.674 01:49:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.674 01:49:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.674 01:49:10 -- common/autotest_common.sh@10 -- # set +x 00:05:05.674 ************************************ 00:05:05.674 START TEST env 00:05:05.674 ************************************ 00:05:05.674 01:49:10 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.674 * Looking for test storage... 
00:05:05.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.674 01:49:11 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:05.674 01:49:11 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:05.674 01:49:11 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:05.934 01:49:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.934 01:49:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.934 01:49:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.934 01:49:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.934 01:49:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.934 01:49:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.934 01:49:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.934 01:49:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.934 01:49:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.934 01:49:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.934 01:49:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.934 01:49:11 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.934 01:49:11 env -- scripts/common.sh@345 -- # : 1 00:05:05.934 01:49:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.934 01:49:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.934 01:49:11 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.934 01:49:11 env -- scripts/common.sh@353 -- # local d=1 00:05:05.934 01:49:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.934 01:49:11 env -- scripts/common.sh@355 -- # echo 1 00:05:05.934 01:49:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.934 01:49:11 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.934 01:49:11 env -- scripts/common.sh@353 -- # local d=2 00:05:05.934 01:49:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.934 01:49:11 env -- scripts/common.sh@355 -- # echo 2 00:05:05.934 01:49:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.934 01:49:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.934 01:49:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.934 01:49:11 env -- scripts/common.sh@368 -- # return 0 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.934 --rc genhtml_branch_coverage=1 00:05:05.934 --rc genhtml_function_coverage=1 00:05:05.934 --rc genhtml_legend=1 00:05:05.934 --rc geninfo_all_blocks=1 00:05:05.934 --rc geninfo_unexecuted_blocks=1 00:05:05.934 00:05:05.934 ' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.934 --rc genhtml_branch_coverage=1 00:05:05.934 --rc genhtml_function_coverage=1 00:05:05.934 --rc genhtml_legend=1 00:05:05.934 --rc geninfo_all_blocks=1 00:05:05.934 --rc geninfo_unexecuted_blocks=1 00:05:05.934 00:05:05.934 ' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:05.934 --rc genhtml_branch_coverage=1 00:05:05.934 --rc genhtml_function_coverage=1 00:05:05.934 --rc genhtml_legend=1 00:05:05.934 --rc geninfo_all_blocks=1 00:05:05.934 --rc geninfo_unexecuted_blocks=1 00:05:05.934 00:05:05.934 ' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:05.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.934 --rc genhtml_branch_coverage=1 00:05:05.934 --rc genhtml_function_coverage=1 00:05:05.934 --rc genhtml_legend=1 00:05:05.934 --rc geninfo_all_blocks=1 00:05:05.934 --rc geninfo_unexecuted_blocks=1 00:05:05.934 00:05:05.934 ' 00:05:05.934 01:49:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.934 01:49:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.934 01:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.934 ************************************ 00:05:05.934 START TEST env_memory 00:05:05.934 ************************************ 00:05:05.934 01:49:11 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.934 00:05:05.934 00:05:05.934 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.934 http://cunit.sourceforge.net/ 00:05:05.934 00:05:05.934 00:05:05.934 Suite: memory 00:05:05.934 Test: alloc and free memory map ...[2024-12-07 01:49:11.255058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.934 passed 00:05:05.934 Test: mem map translation ...[2024-12-07 01:49:11.297329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.934 [2024-12-07 01:49:11.297373] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.934 [2024-12-07 01:49:11.297448] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.934 [2024-12-07 01:49:11.297482] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.934 passed 00:05:05.934 Test: mem map registration ...[2024-12-07 01:49:11.361427] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.934 [2024-12-07 01:49:11.361467] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.934 passed 00:05:06.195 Test: mem map adjacent registrations ...passed 00:05:06.195 00:05:06.195 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.195 suites 1 1 n/a 0 0 00:05:06.195 tests 4 4 4 0 0 00:05:06.195 asserts 152 152 152 0 n/a 00:05:06.195 00:05:06.195 Elapsed time = 0.235 seconds 00:05:06.195 00:05:06.195 real 0m0.279s 00:05:06.195 user 0m0.250s 00:05:06.195 sys 0m0.021s 00:05:06.195 01:49:11 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.195 01:49:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.195 ************************************ 00:05:06.195 END TEST env_memory 00:05:06.195 ************************************ 00:05:06.195 01:49:11 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.195 01:49:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.195 01:49:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.195 01:49:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.195 
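The xtrace records above step through `cmp_versions` from `scripts/common.sh`, which splits the two version strings on `.`, `-`, and `:` (`IFS=.-:`) and compares them field by field to decide whether the installed `lcov` predates version 2 (the `lt 1.15 2` check). A minimal Python sketch of that comparison — an illustrative port, not the shell script itself:

```python
import re

def cmp_versions(ver1, op, ver2):
    # Split on the same separators the shell trace shows (IFS=.-:),
    # then compare numerically field by field.
    a = [int(p) for p in re.split(r"[.:-]", ver1)]
    b = [int(p) for p in re.split(r"[.:-]", ver2)]
    # Pad the shorter list with zeros so missing fields compare as 0.
    length = max(len(a), len(b))
    a += [0] * (length - len(a))
    b += [0] * (length - len(b))
    if op == "<":
        return a < b
    if op == ">":
        return a > b
    return a == b
```

Here `cmp_versions("1.15", "<", "2")` compares `[1, 15]` against the zero-padded `[2, 0]`, so the `lt 1.15 2` check in the trace succeeds (the shell function's `return 0`), and the coverage-era `--rc lcov_branch_coverage=1` options are selected.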
************************************ 00:05:06.195 START TEST env_vtophys 00:05:06.195 ************************************ 00:05:06.195 01:49:11 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.195 EAL: lib.eal log level changed from notice to debug 00:05:06.195 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.195 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.195 EAL: Maximum logical cores by configuration: 128 00:05:06.195 EAL: Detected CPU lcores: 10 00:05:06.195 EAL: Detected NUMA nodes: 1 00:05:06.195 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:06.195 EAL: Detected shared linkage of DPDK 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:06.195 EAL: Registered [vdev] bus. 
00:05:06.195 EAL: bus.vdev log level changed from disabled to notice 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:06.195 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:06.195 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:06.195 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:06.195 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.195 EAL: No shared files mode enabled, IPC is disabled 00:05:06.195 EAL: Selected IOVA mode 'PA' 00:05:06.195 EAL: Probing VFIO support... 00:05:06.195 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.195 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.195 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.195 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.195 EAL: Setting up physically contiguous memory... 
00:05:06.195 EAL: Setting maximum number of open files to 524288 00:05:06.196 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.196 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.196 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.196 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.196 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.196 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.196 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.196 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.196 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.196 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.196 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.196 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.196 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.196 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.196 EAL: Hugepages will be freed exactly as allocated. 
00:05:06.196 EAL: No shared files mode enabled, IPC is disabled 00:05:06.196 EAL: No shared files mode enabled, IPC is disabled 00:05:06.456 EAL: TSC frequency is ~2290000 KHz 00:05:06.456 EAL: Main lcore 0 is ready (tid=7f77d047fa40;cpuset=[0]) 00:05:06.456 EAL: Trying to obtain current memory policy. 00:05:06.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.456 EAL: Restoring previous memory policy: 0 00:05:06.456 EAL: request: mp_malloc_sync 00:05:06.456 EAL: No shared files mode enabled, IPC is disabled 00:05:06.456 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.456 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.456 EAL: No shared files mode enabled, IPC is disabled 00:05:06.456 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.456 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.456 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.456 00:05:06.456 00:05:06.456 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.456 http://cunit.sourceforge.net/ 00:05:06.456 00:05:06.456 00:05:06.456 Suite: components_suite 00:05:06.716 Test: vtophys_malloc_test ...passed 00:05:06.716 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.716 EAL: Trying to obtain current memory policy. 
00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.716 EAL: Trying to obtain current memory policy. 00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.716 EAL: Trying to obtain current memory policy. 00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 18MB 00:05:06.716 EAL: Trying to obtain current memory policy. 
00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 34MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 34MB 00:05:06.716 EAL: Trying to obtain current memory policy. 00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 66MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was shrunk by 66MB 00:05:06.716 EAL: Trying to obtain current memory policy. 00:05:06.716 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.716 EAL: Restoring previous memory policy: 4 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.716 EAL: request: mp_malloc_sync 00:05:06.716 EAL: No shared files mode enabled, IPC is disabled 00:05:06.716 EAL: Heap on socket 0 was expanded by 130MB 00:05:06.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.975 EAL: request: mp_malloc_sync 00:05:06.975 EAL: No shared files mode enabled, IPC is disabled 00:05:06.975 EAL: Heap on socket 0 was shrunk by 130MB 00:05:06.975 EAL: Trying to obtain current memory policy. 
00:05:06.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.975 EAL: Restoring previous memory policy: 4 00:05:06.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.975 EAL: request: mp_malloc_sync 00:05:06.975 EAL: No shared files mode enabled, IPC is disabled 00:05:06.975 EAL: Heap on socket 0 was expanded by 258MB 00:05:06.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.975 EAL: request: mp_malloc_sync 00:05:06.975 EAL: No shared files mode enabled, IPC is disabled 00:05:06.975 EAL: Heap on socket 0 was shrunk by 258MB 00:05:06.975 EAL: Trying to obtain current memory policy. 00:05:06.975 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.975 EAL: Restoring previous memory policy: 4 00:05:06.975 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.975 EAL: request: mp_malloc_sync 00:05:06.975 EAL: No shared files mode enabled, IPC is disabled 00:05:06.975 EAL: Heap on socket 0 was expanded by 514MB 00:05:07.235 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.235 EAL: request: mp_malloc_sync 00:05:07.235 EAL: No shared files mode enabled, IPC is disabled 00:05:07.235 EAL: Heap on socket 0 was shrunk by 514MB 00:05:07.235 EAL: Trying to obtain current memory policy. 
00:05:07.235 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.494 EAL: Restoring previous memory policy: 4 00:05:07.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.494 EAL: request: mp_malloc_sync 00:05:07.494 EAL: No shared files mode enabled, IPC is disabled 00:05:07.494 EAL: Heap on socket 0 was expanded by 1026MB 00:05:07.494 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.753 passed 00:05:07.753 00:05:07.753 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.753 suites 1 1 n/a 0 0 00:05:07.753 tests 2 2 2 0 0 00:05:07.753 asserts 5925 5925 5925 0 n/a 00:05:07.753 00:05:07.753 Elapsed time = 1.340 seconds 00:05:07.753 EAL: request: mp_malloc_sync 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:07.753 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.753 EAL: request: mp_malloc_sync 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: Heap on socket 0 was shrunk by 2MB 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 EAL: No shared files mode enabled, IPC is disabled 00:05:07.753 00:05:07.753 real 0m1.582s 00:05:07.753 user 0m0.749s 00:05:07.753 sys 0m0.698s 00:05:07.753 01:49:13 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:07.753 ************************************ 00:05:07.753 END TEST env_vtophys 00:05:07.753 ************************************ 00:05:07.753 01:49:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 01:49:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.753 01:49:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:07.753 01:49:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:07.753 01:49:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.753 
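The expand/shrink pairs in the `vtophys_malloc_test` output above grow the heap through allocations of 4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB — a sequence that matches a fixed 2 MB base plus successive powers of two. A small Python sketch that reproduces the size ladder seen in the log (the generator is illustrative, inferred from the log, not SPDK test code):

```python
def vtophys_malloc_sizes(max_mb=1026):
    # Sizes observed in the EAL callbacks: 4, 6, 10, 18, ... 1026 MB,
    # i.e. a 2 MB base plus a doubling power-of-two increment.
    sizes, power = [], 2
    while 2 + power <= max_mb:
        sizes.append(2 + power)
        power *= 2
    return sizes
```

Each size in the ladder is allocated, triggering the `Heap on socket 0 was expanded by N MB` mem event callback, and then freed, producing the matching `shrunk by N MB` line before the next, larger request.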
************************************ 00:05:07.753 START TEST env_pci 00:05:07.753 ************************************ 00:05:07.753 01:49:13 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:07.753 00:05:07.753 00:05:07.753 CUnit - A unit testing framework for C - Version 2.1-3 00:05:07.753 http://cunit.sourceforge.net/ 00:05:07.753 00:05:07.753 00:05:07.753 Suite: pci 00:05:07.753 Test: pci_hook ...[2024-12-07 01:49:13.210451] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68733 has claimed it 00:05:08.013 passed 00:05:08.013 00:05:08.013 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.013 suites 1 1 n/a 0 0 00:05:08.013 tests 1 1 1 0 0 00:05:08.013 asserts 25 25 25 0 n/a 00:05:08.013 00:05:08.013 Elapsed time = 0.007 seconds 00:05:08.013 EAL: Cannot find device (10000:00:01.0) 00:05:08.013 EAL: Failed to attach device on primary process 00:05:08.013 00:05:08.013 real 0m0.093s 00:05:08.013 user 0m0.039s 00:05:08.013 sys 0m0.053s 00:05:08.013 01:49:13 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.013 01:49:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:08.013 ************************************ 00:05:08.013 END TEST env_pci 00:05:08.013 ************************************ 00:05:08.013 01:49:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:08.013 01:49:13 env -- env/env.sh@15 -- # uname 00:05:08.013 01:49:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:08.013 01:49:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:08.013 01:49:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.013 01:49:13 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:08.013 01:49:13 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.013 01:49:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.013 ************************************ 00:05:08.013 START TEST env_dpdk_post_init 00:05:08.013 ************************************ 00:05:08.013 01:49:13 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:08.013 EAL: Detected CPU lcores: 10 00:05:08.013 EAL: Detected NUMA nodes: 1 00:05:08.013 EAL: Detected shared linkage of DPDK 00:05:08.013 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.013 EAL: Selected IOVA mode 'PA' 00:05:08.273 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:08.273 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:08.273 Starting DPDK initialization... 00:05:08.273 Starting SPDK post initialization... 00:05:08.273 SPDK NVMe probe 00:05:08.273 Attaching to 0000:00:10.0 00:05:08.273 Attaching to 0000:00:11.0 00:05:08.273 Attached to 0000:00:10.0 00:05:08.273 Attached to 0000:00:11.0 00:05:08.273 Cleaning up... 
00:05:08.273 00:05:08.273 real 0m0.229s 00:05:08.273 user 0m0.063s 00:05:08.273 sys 0m0.066s 00:05:08.274 01:49:13 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.274 01:49:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:08.274 ************************************ 00:05:08.274 END TEST env_dpdk_post_init 00:05:08.274 ************************************ 00:05:08.274 01:49:13 env -- env/env.sh@26 -- # uname 00:05:08.274 01:49:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:08.274 01:49:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.274 01:49:13 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.274 01:49:13 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.274 01:49:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.274 ************************************ 00:05:08.274 START TEST env_mem_callbacks 00:05:08.274 ************************************ 00:05:08.274 01:49:13 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:08.274 EAL: Detected CPU lcores: 10 00:05:08.274 EAL: Detected NUMA nodes: 1 00:05:08.274 EAL: Detected shared linkage of DPDK 00:05:08.274 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:08.274 EAL: Selected IOVA mode 'PA' 00:05:08.533 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.533 00:05:08.533 00:05:08.533 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.533 http://cunit.sourceforge.net/ 00:05:08.533 00:05:08.533 00:05:08.533 Suite: memory 00:05:08.533 Test: test ... 
00:05:08.533 register 0x200000200000 2097152 00:05:08.533 malloc 3145728 00:05:08.533 register 0x200000400000 4194304 00:05:08.533 buf 0x200000500000 len 3145728 PASSED 00:05:08.533 malloc 64 00:05:08.533 buf 0x2000004fff40 len 64 PASSED 00:05:08.533 malloc 4194304 00:05:08.533 register 0x200000800000 6291456 00:05:08.533 buf 0x200000a00000 len 4194304 PASSED 00:05:08.533 free 0x200000500000 3145728 00:05:08.533 free 0x2000004fff40 64 00:05:08.533 unregister 0x200000400000 4194304 PASSED 00:05:08.533 free 0x200000a00000 4194304 00:05:08.533 unregister 0x200000800000 6291456 PASSED 00:05:08.533 malloc 8388608 00:05:08.533 register 0x200000400000 10485760 00:05:08.533 buf 0x200000600000 len 8388608 PASSED 00:05:08.533 free 0x200000600000 8388608 00:05:08.533 unregister 0x200000400000 10485760 PASSED 00:05:08.533 passed 00:05:08.533 00:05:08.533 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.533 suites 1 1 n/a 0 0 00:05:08.533 tests 1 1 1 0 0 00:05:08.533 asserts 15 15 15 0 n/a 00:05:08.533 00:05:08.533 Elapsed time = 0.011 seconds 00:05:08.533 00:05:08.533 real 0m0.182s 00:05:08.533 user 0m0.032s 00:05:08.533 sys 0m0.049s 00:05:08.533 01:49:13 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.533 01:49:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:08.533 ************************************ 00:05:08.533 END TEST env_mem_callbacks 00:05:08.533 ************************************ 00:05:08.533 00:05:08.533 real 0m2.922s 00:05:08.533 user 0m1.340s 00:05:08.533 sys 0m1.260s 00:05:08.533 01:49:13 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.533 01:49:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:08.533 ************************************ 00:05:08.533 END TEST env 00:05:08.533 ************************************ 00:05:08.533 01:49:13 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.533 01:49:13 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:08.533 01:49:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:08.533 01:49:13 -- common/autotest_common.sh@10 -- # set +x 00:05:08.533 ************************************ 00:05:08.533 START TEST rpc 00:05:08.533 ************************************ 00:05:08.533 01:49:13 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:08.793 * Looking for test storage... 00:05:08.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.793 01:49:14 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.793 01:49:14 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.793 01:49:14 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.793 01:49:14 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.793 01:49:14 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.793 01:49:14 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.793 01:49:14 rpc -- scripts/common.sh@345 -- # : 1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.793 01:49:14 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.793 01:49:14 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.793 01:49:14 rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.793 01:49:14 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.793 01:49:14 rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.793 01:49:14 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.793 01:49:14 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.793 01:49:14 rpc -- scripts/common.sh@368 -- # return 0 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.793 --rc genhtml_branch_coverage=1 00:05:08.793 --rc genhtml_function_coverage=1 00:05:08.793 --rc genhtml_legend=1 00:05:08.793 --rc geninfo_all_blocks=1 00:05:08.793 --rc geninfo_unexecuted_blocks=1 00:05:08.793 00:05:08.793 ' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.793 --rc genhtml_branch_coverage=1 00:05:08.793 --rc genhtml_function_coverage=1 00:05:08.793 --rc genhtml_legend=1 00:05:08.793 --rc geninfo_all_blocks=1 00:05:08.793 --rc geninfo_unexecuted_blocks=1 00:05:08.793 00:05:08.793 ' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:08.793 --rc genhtml_branch_coverage=1 00:05:08.793 --rc genhtml_function_coverage=1 00:05:08.793 --rc genhtml_legend=1 00:05:08.793 --rc geninfo_all_blocks=1 00:05:08.793 --rc geninfo_unexecuted_blocks=1 00:05:08.793 00:05:08.793 ' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.793 --rc genhtml_branch_coverage=1 00:05:08.793 --rc genhtml_function_coverage=1 00:05:08.793 --rc genhtml_legend=1 00:05:08.793 --rc geninfo_all_blocks=1 00:05:08.793 --rc geninfo_unexecuted_blocks=1 00:05:08.793 00:05:08.793 ' 00:05:08.793 01:49:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68860 00:05:08.793 01:49:14 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:08.793 01:49:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.793 01:49:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68860 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@831 -- # '[' -z 68860 ']' 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.793 01:49:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.052 [2024-12-07 01:49:14.263345] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:09.052 [2024-12-07 01:49:14.263487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68860 ] 00:05:09.052 [2024-12-07 01:49:14.411168] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.052 [2024-12-07 01:49:14.457940] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:09.052 [2024-12-07 01:49:14.457989] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68860' to capture a snapshot of events at runtime. 00:05:09.052 [2024-12-07 01:49:14.458017] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:09.052 [2024-12-07 01:49:14.458025] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:09.052 [2024-12-07 01:49:14.458037] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68860 for offline analysis/debug. 
00:05:09.052 [2024-12-07 01:49:14.458092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.620 01:49:15 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.620 01:49:15 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:09.620 01:49:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.620 01:49:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.620 01:49:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.620 01:49:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.620 01:49:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.620 01:49:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.620 01:49:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.878 ************************************ 00:05:09.878 START TEST rpc_integrity 00:05:09.878 ************************************ 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.878 01:49:15 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.878 { 00:05:09.878 "name": "Malloc0", 00:05:09.878 "aliases": [ 00:05:09.878 "1f6ee54f-ec7e-4fa6-a974-e071b4640fd3" 00:05:09.878 ], 00:05:09.878 "product_name": "Malloc disk", 00:05:09.878 "block_size": 512, 00:05:09.878 "num_blocks": 16384, 00:05:09.878 "uuid": "1f6ee54f-ec7e-4fa6-a974-e071b4640fd3", 00:05:09.878 "assigned_rate_limits": { 00:05:09.878 "rw_ios_per_sec": 0, 00:05:09.878 "rw_mbytes_per_sec": 0, 00:05:09.878 "r_mbytes_per_sec": 0, 00:05:09.878 "w_mbytes_per_sec": 0 00:05:09.878 }, 00:05:09.878 "claimed": false, 00:05:09.878 "zoned": false, 00:05:09.878 "supported_io_types": { 00:05:09.878 "read": true, 00:05:09.878 "write": true, 00:05:09.878 "unmap": true, 00:05:09.878 "flush": true, 00:05:09.878 "reset": true, 00:05:09.878 "nvme_admin": false, 00:05:09.878 "nvme_io": false, 00:05:09.878 "nvme_io_md": false, 00:05:09.878 "write_zeroes": true, 00:05:09.878 "zcopy": true, 00:05:09.878 "get_zone_info": false, 00:05:09.878 "zone_management": false, 00:05:09.878 "zone_append": false, 00:05:09.878 "compare": false, 00:05:09.878 "compare_and_write": false, 00:05:09.878 "abort": true, 00:05:09.878 "seek_hole": false, 
00:05:09.878 "seek_data": false, 00:05:09.878 "copy": true, 00:05:09.878 "nvme_iov_md": false 00:05:09.878 }, 00:05:09.878 "memory_domains": [ 00:05:09.878 { 00:05:09.878 "dma_device_id": "system", 00:05:09.878 "dma_device_type": 1 00:05:09.878 }, 00:05:09.878 { 00:05:09.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.878 "dma_device_type": 2 00:05:09.878 } 00:05:09.878 ], 00:05:09.878 "driver_specific": {} 00:05:09.878 } 00:05:09.878 ]' 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.878 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.878 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.878 [2024-12-07 01:49:15.279059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.878 [2024-12-07 01:49:15.279138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.878 [2024-12-07 01:49:15.279172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:09.878 [2024-12-07 01:49:15.279182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.878 [2024-12-07 01:49:15.281766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.878 [2024-12-07 01:49:15.281813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.878 Passthru0 00:05:09.879 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.879 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.879 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.879 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:09.879 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.879 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.879 { 00:05:09.879 "name": "Malloc0", 00:05:09.879 "aliases": [ 00:05:09.879 "1f6ee54f-ec7e-4fa6-a974-e071b4640fd3" 00:05:09.879 ], 00:05:09.879 "product_name": "Malloc disk", 00:05:09.879 "block_size": 512, 00:05:09.879 "num_blocks": 16384, 00:05:09.879 "uuid": "1f6ee54f-ec7e-4fa6-a974-e071b4640fd3", 00:05:09.879 "assigned_rate_limits": { 00:05:09.879 "rw_ios_per_sec": 0, 00:05:09.879 "rw_mbytes_per_sec": 0, 00:05:09.879 "r_mbytes_per_sec": 0, 00:05:09.879 "w_mbytes_per_sec": 0 00:05:09.879 }, 00:05:09.879 "claimed": true, 00:05:09.879 "claim_type": "exclusive_write", 00:05:09.879 "zoned": false, 00:05:09.879 "supported_io_types": { 00:05:09.879 "read": true, 00:05:09.879 "write": true, 00:05:09.879 "unmap": true, 00:05:09.879 "flush": true, 00:05:09.879 "reset": true, 00:05:09.879 "nvme_admin": false, 00:05:09.879 "nvme_io": false, 00:05:09.879 "nvme_io_md": false, 00:05:09.879 "write_zeroes": true, 00:05:09.879 "zcopy": true, 00:05:09.879 "get_zone_info": false, 00:05:09.879 "zone_management": false, 00:05:09.879 "zone_append": false, 00:05:09.879 "compare": false, 00:05:09.879 "compare_and_write": false, 00:05:09.879 "abort": true, 00:05:09.879 "seek_hole": false, 00:05:09.879 "seek_data": false, 00:05:09.879 "copy": true, 00:05:09.879 "nvme_iov_md": false 00:05:09.879 }, 00:05:09.879 "memory_domains": [ 00:05:09.879 { 00:05:09.879 "dma_device_id": "system", 00:05:09.879 "dma_device_type": 1 00:05:09.879 }, 00:05:09.879 { 00:05:09.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.879 "dma_device_type": 2 00:05:09.879 } 00:05:09.879 ], 00:05:09.879 "driver_specific": {} 00:05:09.879 }, 00:05:09.879 { 00:05:09.879 "name": "Passthru0", 00:05:09.879 "aliases": [ 00:05:09.879 "6bd5be5d-cba3-5d50-88d6-48097530b931" 00:05:09.879 ], 00:05:09.879 "product_name": "passthru", 00:05:09.879 
"block_size": 512, 00:05:09.879 "num_blocks": 16384, 00:05:09.879 "uuid": "6bd5be5d-cba3-5d50-88d6-48097530b931", 00:05:09.879 "assigned_rate_limits": { 00:05:09.879 "rw_ios_per_sec": 0, 00:05:09.879 "rw_mbytes_per_sec": 0, 00:05:09.879 "r_mbytes_per_sec": 0, 00:05:09.879 "w_mbytes_per_sec": 0 00:05:09.879 }, 00:05:09.879 "claimed": false, 00:05:09.879 "zoned": false, 00:05:09.879 "supported_io_types": { 00:05:09.879 "read": true, 00:05:09.879 "write": true, 00:05:09.879 "unmap": true, 00:05:09.879 "flush": true, 00:05:09.879 "reset": true, 00:05:09.879 "nvme_admin": false, 00:05:09.879 "nvme_io": false, 00:05:09.879 "nvme_io_md": false, 00:05:09.879 "write_zeroes": true, 00:05:09.879 "zcopy": true, 00:05:09.879 "get_zone_info": false, 00:05:09.879 "zone_management": false, 00:05:09.879 "zone_append": false, 00:05:09.879 "compare": false, 00:05:09.879 "compare_and_write": false, 00:05:09.879 "abort": true, 00:05:09.879 "seek_hole": false, 00:05:09.879 "seek_data": false, 00:05:09.879 "copy": true, 00:05:09.879 "nvme_iov_md": false 00:05:09.879 }, 00:05:09.879 "memory_domains": [ 00:05:09.879 { 00:05:09.879 "dma_device_id": "system", 00:05:09.879 "dma_device_type": 1 00:05:09.879 }, 00:05:09.879 { 00:05:09.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.879 "dma_device_type": 2 00:05:09.879 } 00:05:09.879 ], 00:05:09.879 "driver_specific": { 00:05:09.879 "passthru": { 00:05:09.879 "name": "Passthru0", 00:05:09.879 "base_bdev_name": "Malloc0" 00:05:09.879 } 00:05:09.879 } 00:05:09.879 } 00:05:09.879 ]' 00:05:09.879 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 01:49:15 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.144 01:49:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.144 00:05:10.144 real 0m0.358s 00:05:10.144 user 0m0.210s 00:05:10.144 sys 0m0.057s 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 ************************************ 00:05:10.144 END TEST rpc_integrity 00:05:10.144 ************************************ 00:05:10.144 01:49:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:10.144 01:49:15 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.144 01:49:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.144 01:49:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 ************************************ 00:05:10.144 START TEST rpc_plugins 00:05:10.144 ************************************ 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:10.144 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.144 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:10.144 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.144 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.144 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:10.144 { 00:05:10.144 "name": "Malloc1", 00:05:10.144 "aliases": [ 00:05:10.144 "cee021a2-84da-4a67-b790-6d09a679ed04" 00:05:10.144 ], 00:05:10.144 "product_name": "Malloc disk", 00:05:10.144 "block_size": 4096, 00:05:10.144 "num_blocks": 256, 00:05:10.144 "uuid": "cee021a2-84da-4a67-b790-6d09a679ed04", 00:05:10.144 "assigned_rate_limits": { 00:05:10.144 "rw_ios_per_sec": 0, 00:05:10.144 "rw_mbytes_per_sec": 0, 00:05:10.144 "r_mbytes_per_sec": 0, 00:05:10.144 "w_mbytes_per_sec": 0 00:05:10.144 }, 00:05:10.144 "claimed": false, 00:05:10.144 "zoned": false, 00:05:10.144 "supported_io_types": { 00:05:10.144 "read": true, 00:05:10.144 "write": true, 00:05:10.144 "unmap": true, 00:05:10.144 "flush": true, 00:05:10.144 "reset": true, 00:05:10.144 "nvme_admin": false, 00:05:10.144 "nvme_io": false, 00:05:10.144 "nvme_io_md": false, 00:05:10.144 "write_zeroes": true, 00:05:10.144 "zcopy": true, 00:05:10.144 "get_zone_info": false, 00:05:10.144 "zone_management": false, 00:05:10.144 "zone_append": false, 00:05:10.144 "compare": false, 00:05:10.144 "compare_and_write": false, 00:05:10.144 "abort": true, 00:05:10.144 "seek_hole": false, 00:05:10.144 "seek_data": false, 00:05:10.144 "copy": 
true, 00:05:10.144 "nvme_iov_md": false 00:05:10.144 }, 00:05:10.144 "memory_domains": [ 00:05:10.144 { 00:05:10.144 "dma_device_id": "system", 00:05:10.144 "dma_device_type": 1 00:05:10.144 }, 00:05:10.144 { 00:05:10.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.144 "dma_device_type": 2 00:05:10.144 } 00:05:10.144 ], 00:05:10.144 "driver_specific": {} 00:05:10.144 } 00:05:10.144 ]' 00:05:10.144 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:10.411 01:49:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:10.411 00:05:10.411 real 0m0.164s 00:05:10.411 user 0m0.093s 00:05:10.411 sys 0m0.030s 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.411 01:49:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 ************************************ 00:05:10.411 END TEST rpc_plugins 00:05:10.411 ************************************ 00:05:10.411 01:49:15 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.411 01:49:15 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.411 01:49:15 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.411 01:49:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 ************************************ 00:05:10.411 START TEST rpc_trace_cmd_test 00:05:10.411 ************************************ 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.411 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68860", 00:05:10.411 "tpoint_group_mask": "0x8", 00:05:10.411 "iscsi_conn": { 00:05:10.411 "mask": "0x2", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "scsi": { 00:05:10.411 "mask": "0x4", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "bdev": { 00:05:10.411 "mask": "0x8", 00:05:10.411 "tpoint_mask": "0xffffffffffffffff" 00:05:10.411 }, 00:05:10.411 "nvmf_rdma": { 00:05:10.411 "mask": "0x10", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "nvmf_tcp": { 00:05:10.411 "mask": "0x20", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "ftl": { 00:05:10.411 "mask": "0x40", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "blobfs": { 00:05:10.411 "mask": "0x80", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "dsa": { 00:05:10.411 "mask": "0x200", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "thread": { 00:05:10.411 "mask": "0x400", 00:05:10.411 
"tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "nvme_pcie": { 00:05:10.411 "mask": "0x800", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "iaa": { 00:05:10.411 "mask": "0x1000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "nvme_tcp": { 00:05:10.411 "mask": "0x2000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "bdev_nvme": { 00:05:10.411 "mask": "0x4000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "sock": { 00:05:10.411 "mask": "0x8000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "blob": { 00:05:10.411 "mask": "0x10000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 }, 00:05:10.411 "bdev_raid": { 00:05:10.411 "mask": "0x20000", 00:05:10.411 "tpoint_mask": "0x0" 00:05:10.411 } 00:05:10.411 }' 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.411 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:10.670 00:05:10.670 real 0m0.267s 00:05:10.670 user 0m0.216s 00:05:10.670 sys 0m0.040s 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.670 01:49:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.670 
************************************ 00:05:10.670 END TEST rpc_trace_cmd_test 00:05:10.670 ************************************ 00:05:10.670 01:49:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.670 01:49:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.670 01:49:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.670 01:49:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.670 01:49:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.670 01:49:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.670 ************************************ 00:05:10.671 START TEST rpc_daemon_integrity 00:05:10.671 ************************************ 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.671 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:10.930 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:05:10.930 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.930 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.930 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.930 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.930 { 00:05:10.930 "name": "Malloc2", 00:05:10.930 "aliases": [ 00:05:10.930 "40ff2ddc-c8f0-464c-a1cc-63e3f443f8ea" 00:05:10.930 ], 00:05:10.930 "product_name": "Malloc disk", 00:05:10.930 "block_size": 512, 00:05:10.930 "num_blocks": 16384, 00:05:10.930 "uuid": "40ff2ddc-c8f0-464c-a1cc-63e3f443f8ea", 00:05:10.930 "assigned_rate_limits": { 00:05:10.930 "rw_ios_per_sec": 0, 00:05:10.930 "rw_mbytes_per_sec": 0, 00:05:10.930 "r_mbytes_per_sec": 0, 00:05:10.930 "w_mbytes_per_sec": 0 00:05:10.930 }, 00:05:10.930 "claimed": false, 00:05:10.930 "zoned": false, 00:05:10.930 "supported_io_types": { 00:05:10.930 "read": true, 00:05:10.930 "write": true, 00:05:10.930 "unmap": true, 00:05:10.930 "flush": true, 00:05:10.930 "reset": true, 00:05:10.930 "nvme_admin": false, 00:05:10.930 "nvme_io": false, 00:05:10.930 "nvme_io_md": false, 00:05:10.930 "write_zeroes": true, 00:05:10.930 "zcopy": true, 00:05:10.931 "get_zone_info": false, 00:05:10.931 "zone_management": false, 00:05:10.931 "zone_append": false, 00:05:10.931 "compare": false, 00:05:10.931 "compare_and_write": false, 00:05:10.931 "abort": true, 00:05:10.931 "seek_hole": false, 00:05:10.931 "seek_data": false, 00:05:10.931 "copy": true, 00:05:10.931 "nvme_iov_md": false 00:05:10.931 }, 00:05:10.931 "memory_domains": [ 00:05:10.931 { 00:05:10.931 "dma_device_id": "system", 00:05:10.931 "dma_device_type": 1 00:05:10.931 }, 00:05:10.931 { 00:05:10.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.931 "dma_device_type": 2 00:05:10.931 } 00:05:10.931 ], 00:05:10.931 "driver_specific": {} 00:05:10.931 } 00:05:10.931 ]' 00:05:10.931 
01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.931 [2024-12-07 01:49:16.194623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:10.931 [2024-12-07 01:49:16.194710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.931 [2024-12-07 01:49:16.194738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:10.931 [2024-12-07 01:49:16.194750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.931 [2024-12-07 01:49:16.197387] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.931 [2024-12-07 01:49:16.197426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.931 Passthru0 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.931 { 00:05:10.931 "name": "Malloc2", 00:05:10.931 "aliases": [ 00:05:10.931 "40ff2ddc-c8f0-464c-a1cc-63e3f443f8ea" 00:05:10.931 ], 00:05:10.931 "product_name": "Malloc disk", 00:05:10.931 "block_size": 512, 
00:05:10.931 "num_blocks": 16384, 00:05:10.931 "uuid": "40ff2ddc-c8f0-464c-a1cc-63e3f443f8ea", 00:05:10.931 "assigned_rate_limits": { 00:05:10.931 "rw_ios_per_sec": 0, 00:05:10.931 "rw_mbytes_per_sec": 0, 00:05:10.931 "r_mbytes_per_sec": 0, 00:05:10.931 "w_mbytes_per_sec": 0 00:05:10.931 }, 00:05:10.931 "claimed": true, 00:05:10.931 "claim_type": "exclusive_write", 00:05:10.931 "zoned": false, 00:05:10.931 "supported_io_types": { 00:05:10.931 "read": true, 00:05:10.931 "write": true, 00:05:10.931 "unmap": true, 00:05:10.931 "flush": true, 00:05:10.931 "reset": true, 00:05:10.931 "nvme_admin": false, 00:05:10.931 "nvme_io": false, 00:05:10.931 "nvme_io_md": false, 00:05:10.931 "write_zeroes": true, 00:05:10.931 "zcopy": true, 00:05:10.931 "get_zone_info": false, 00:05:10.931 "zone_management": false, 00:05:10.931 "zone_append": false, 00:05:10.931 "compare": false, 00:05:10.931 "compare_and_write": false, 00:05:10.931 "abort": true, 00:05:10.931 "seek_hole": false, 00:05:10.931 "seek_data": false, 00:05:10.931 "copy": true, 00:05:10.931 "nvme_iov_md": false 00:05:10.931 }, 00:05:10.931 "memory_domains": [ 00:05:10.931 { 00:05:10.931 "dma_device_id": "system", 00:05:10.931 "dma_device_type": 1 00:05:10.931 }, 00:05:10.931 { 00:05:10.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.931 "dma_device_type": 2 00:05:10.931 } 00:05:10.931 ], 00:05:10.931 "driver_specific": {} 00:05:10.931 }, 00:05:10.931 { 00:05:10.931 "name": "Passthru0", 00:05:10.931 "aliases": [ 00:05:10.931 "746ccd0e-ffe7-5702-a44f-03723eaedf42" 00:05:10.931 ], 00:05:10.931 "product_name": "passthru", 00:05:10.931 "block_size": 512, 00:05:10.931 "num_blocks": 16384, 00:05:10.931 "uuid": "746ccd0e-ffe7-5702-a44f-03723eaedf42", 00:05:10.931 "assigned_rate_limits": { 00:05:10.931 "rw_ios_per_sec": 0, 00:05:10.931 "rw_mbytes_per_sec": 0, 00:05:10.931 "r_mbytes_per_sec": 0, 00:05:10.931 "w_mbytes_per_sec": 0 00:05:10.931 }, 00:05:10.931 "claimed": false, 00:05:10.931 "zoned": false, 00:05:10.931 
"supported_io_types": { 00:05:10.931 "read": true, 00:05:10.931 "write": true, 00:05:10.931 "unmap": true, 00:05:10.931 "flush": true, 00:05:10.931 "reset": true, 00:05:10.931 "nvme_admin": false, 00:05:10.931 "nvme_io": false, 00:05:10.931 "nvme_io_md": false, 00:05:10.931 "write_zeroes": true, 00:05:10.931 "zcopy": true, 00:05:10.931 "get_zone_info": false, 00:05:10.931 "zone_management": false, 00:05:10.931 "zone_append": false, 00:05:10.931 "compare": false, 00:05:10.931 "compare_and_write": false, 00:05:10.931 "abort": true, 00:05:10.931 "seek_hole": false, 00:05:10.931 "seek_data": false, 00:05:10.931 "copy": true, 00:05:10.931 "nvme_iov_md": false 00:05:10.931 }, 00:05:10.931 "memory_domains": [ 00:05:10.931 { 00:05:10.931 "dma_device_id": "system", 00:05:10.931 "dma_device_type": 1 00:05:10.931 }, 00:05:10.931 { 00:05:10.931 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.931 "dma_device_type": 2 00:05:10.931 } 00:05:10.931 ], 00:05:10.931 "driver_specific": { 00:05:10.931 "passthru": { 00:05:10.931 "name": "Passthru0", 00:05:10.931 "base_bdev_name": "Malloc2" 00:05:10.931 } 00:05:10.931 } 00:05:10.931 } 00:05:10.931 ]' 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # 
set +x 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.931 00:05:10.931 real 0m0.301s 00:05:10.931 user 0m0.176s 00:05:10.931 sys 0m0.052s 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.931 01:49:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.931 ************************************ 00:05:10.931 END TEST rpc_daemon_integrity 00:05:10.931 ************************************ 00:05:11.191 01:49:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:11.191 01:49:16 rpc -- rpc/rpc.sh@84 -- # killprocess 68860 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@950 -- # '[' -z 68860 ']' 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@954 -- # kill -0 68860 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@955 -- # uname 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68860 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.191 killing process with pid 68860 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 68860' 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@969 -- # kill 68860 00:05:11.191 01:49:16 rpc -- common/autotest_common.sh@974 -- # wait 68860 00:05:11.451 00:05:11.451 real 0m2.889s 00:05:11.451 user 0m3.491s 00:05:11.451 sys 0m0.835s 00:05:11.451 01:49:16 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.451 01:49:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.451 ************************************ 00:05:11.451 END TEST rpc 00:05:11.451 ************************************ 00:05:11.451 01:49:16 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.451 01:49:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.451 01:49:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.451 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:05:11.451 ************************************ 00:05:11.451 START TEST skip_rpc 00:05:11.451 ************************************ 00:05:11.451 01:49:16 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:11.711 * Looking for test storage... 
00:05:11.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.711 01:49:17 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:11.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.711 --rc genhtml_branch_coverage=1 00:05:11.711 --rc genhtml_function_coverage=1 00:05:11.711 --rc genhtml_legend=1 00:05:11.711 --rc geninfo_all_blocks=1 00:05:11.711 --rc geninfo_unexecuted_blocks=1 00:05:11.711 00:05:11.711 ' 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:11.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.711 --rc genhtml_branch_coverage=1 00:05:11.711 --rc genhtml_function_coverage=1 00:05:11.711 --rc genhtml_legend=1 00:05:11.711 --rc geninfo_all_blocks=1 00:05:11.711 --rc geninfo_unexecuted_blocks=1 00:05:11.711 00:05:11.711 ' 00:05:11.711 01:49:17 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:11.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.711 --rc genhtml_branch_coverage=1 00:05:11.712 --rc genhtml_function_coverage=1 00:05:11.712 --rc genhtml_legend=1 00:05:11.712 --rc geninfo_all_blocks=1 00:05:11.712 --rc geninfo_unexecuted_blocks=1 00:05:11.712 00:05:11.712 ' 00:05:11.712 01:49:17 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:11.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.712 --rc genhtml_branch_coverage=1 00:05:11.712 --rc genhtml_function_coverage=1 00:05:11.712 --rc genhtml_legend=1 00:05:11.712 --rc geninfo_all_blocks=1 00:05:11.712 --rc geninfo_unexecuted_blocks=1 00:05:11.712 00:05:11.712 ' 00:05:11.712 01:49:17 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.712 01:49:17 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:11.712 01:49:17 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.712 01:49:17 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.712 01:49:17 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.712 01:49:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.712 ************************************ 00:05:11.712 START TEST skip_rpc 00:05:11.712 ************************************ 00:05:11.712 01:49:17 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:11.712 01:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69062 00:05:11.712 01:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.712 01:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.712 01:49:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:11.970 [2024-12-07 01:49:17.228242] Starting SPDK v24.09.1-pre 
git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:11.971 [2024-12-07 01:49:17.228373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69062 ] 00:05:11.971 [2024-12-07 01:49:17.376935] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.971 [2024-12-07 01:49:17.429352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69062 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69062 ']' 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69062 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69062 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:17.247 killing process with pid 69062 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69062' 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69062 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69062 00:05:17.247 00:05:17.247 real 0m5.449s 00:05:17.247 user 0m5.045s 00:05:17.247 sys 0m0.330s 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.247 01:49:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.247 ************************************ 00:05:17.247 END TEST skip_rpc 00:05:17.247 ************************************ 00:05:17.247 01:49:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:17.247 01:49:22 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:17.247 01:49:22 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.247 01:49:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.247 
************************************ 00:05:17.247 START TEST skip_rpc_with_json 00:05:17.247 ************************************ 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69149 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69149 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69149 ']' 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:17.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:17.247 01:49:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.507 [2024-12-07 01:49:22.741605] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:17.507 [2024-12-07 01:49:22.742155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69149 ] 00:05:17.507 [2024-12-07 01:49:22.889107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.507 [2024-12-07 01:49:22.937787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.445 [2024-12-07 01:49:23.574945] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:18.445 request: 00:05:18.445 { 00:05:18.445 "trtype": "tcp", 00:05:18.445 "method": "nvmf_get_transports", 00:05:18.445 "req_id": 1 00:05:18.445 } 00:05:18.445 Got JSON-RPC error response 00:05:18.445 response: 00:05:18.445 { 00:05:18.445 "code": -19, 00:05:18.445 "message": "No such device" 00:05:18.445 } 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.445 [2024-12-07 01:49:23.587049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.445 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.445 { 00:05:18.445 "subsystems": [ 00:05:18.445 { 00:05:18.445 "subsystem": "fsdev", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "fsdev_set_opts", 00:05:18.445 "params": { 00:05:18.445 "fsdev_io_pool_size": 65535, 00:05:18.445 "fsdev_io_cache_size": 256 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "keyring", 00:05:18.445 "config": [] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "iobuf", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "iobuf_set_options", 00:05:18.445 "params": { 00:05:18.445 "small_pool_count": 8192, 00:05:18.445 "large_pool_count": 1024, 00:05:18.445 "small_bufsize": 8192, 00:05:18.445 "large_bufsize": 135168 00:05:18.445 } 00:05:18.445 } 00:05:18.445 ] 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "subsystem": "sock", 00:05:18.445 "config": [ 00:05:18.445 { 00:05:18.445 "method": "sock_set_default_impl", 00:05:18.445 "params": { 00:05:18.445 "impl_name": "posix" 00:05:18.445 } 00:05:18.445 }, 00:05:18.445 { 00:05:18.445 "method": "sock_impl_set_options", 00:05:18.445 "params": { 00:05:18.445 "impl_name": "ssl", 00:05:18.445 "recv_buf_size": 4096, 00:05:18.445 "send_buf_size": 4096, 00:05:18.446 "enable_recv_pipe": true, 00:05:18.446 "enable_quickack": false, 00:05:18.446 "enable_placement_id": 0, 00:05:18.446 
"enable_zerocopy_send_server": true, 00:05:18.446 "enable_zerocopy_send_client": false, 00:05:18.446 "zerocopy_threshold": 0, 00:05:18.446 "tls_version": 0, 00:05:18.446 "enable_ktls": false 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "sock_impl_set_options", 00:05:18.446 "params": { 00:05:18.446 "impl_name": "posix", 00:05:18.446 "recv_buf_size": 2097152, 00:05:18.446 "send_buf_size": 2097152, 00:05:18.446 "enable_recv_pipe": true, 00:05:18.446 "enable_quickack": false, 00:05:18.446 "enable_placement_id": 0, 00:05:18.446 "enable_zerocopy_send_server": true, 00:05:18.446 "enable_zerocopy_send_client": false, 00:05:18.446 "zerocopy_threshold": 0, 00:05:18.446 "tls_version": 0, 00:05:18.446 "enable_ktls": false 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "vmd", 00:05:18.446 "config": [] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "accel", 00:05:18.446 "config": [ 00:05:18.446 { 00:05:18.446 "method": "accel_set_options", 00:05:18.446 "params": { 00:05:18.446 "small_cache_size": 128, 00:05:18.446 "large_cache_size": 16, 00:05:18.446 "task_count": 2048, 00:05:18.446 "sequence_count": 2048, 00:05:18.446 "buf_count": 2048 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "bdev", 00:05:18.446 "config": [ 00:05:18.446 { 00:05:18.446 "method": "bdev_set_options", 00:05:18.446 "params": { 00:05:18.446 "bdev_io_pool_size": 65535, 00:05:18.446 "bdev_io_cache_size": 256, 00:05:18.446 "bdev_auto_examine": true, 00:05:18.446 "iobuf_small_cache_size": 128, 00:05:18.446 "iobuf_large_cache_size": 16 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "bdev_raid_set_options", 00:05:18.446 "params": { 00:05:18.446 "process_window_size_kb": 1024, 00:05:18.446 "process_max_bandwidth_mb_sec": 0 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "bdev_iscsi_set_options", 00:05:18.446 "params": { 00:05:18.446 
"timeout_sec": 30 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "bdev_nvme_set_options", 00:05:18.446 "params": { 00:05:18.446 "action_on_timeout": "none", 00:05:18.446 "timeout_us": 0, 00:05:18.446 "timeout_admin_us": 0, 00:05:18.446 "keep_alive_timeout_ms": 10000, 00:05:18.446 "arbitration_burst": 0, 00:05:18.446 "low_priority_weight": 0, 00:05:18.446 "medium_priority_weight": 0, 00:05:18.446 "high_priority_weight": 0, 00:05:18.446 "nvme_adminq_poll_period_us": 10000, 00:05:18.446 "nvme_ioq_poll_period_us": 0, 00:05:18.446 "io_queue_requests": 0, 00:05:18.446 "delay_cmd_submit": true, 00:05:18.446 "transport_retry_count": 4, 00:05:18.446 "bdev_retry_count": 3, 00:05:18.446 "transport_ack_timeout": 0, 00:05:18.446 "ctrlr_loss_timeout_sec": 0, 00:05:18.446 "reconnect_delay_sec": 0, 00:05:18.446 "fast_io_fail_timeout_sec": 0, 00:05:18.446 "disable_auto_failback": false, 00:05:18.446 "generate_uuids": false, 00:05:18.446 "transport_tos": 0, 00:05:18.446 "nvme_error_stat": false, 00:05:18.446 "rdma_srq_size": 0, 00:05:18.446 "io_path_stat": false, 00:05:18.446 "allow_accel_sequence": false, 00:05:18.446 "rdma_max_cq_size": 0, 00:05:18.446 "rdma_cm_event_timeout_ms": 0, 00:05:18.446 "dhchap_digests": [ 00:05:18.446 "sha256", 00:05:18.446 "sha384", 00:05:18.446 "sha512" 00:05:18.446 ], 00:05:18.446 "dhchap_dhgroups": [ 00:05:18.446 "null", 00:05:18.446 "ffdhe2048", 00:05:18.446 "ffdhe3072", 00:05:18.446 "ffdhe4096", 00:05:18.446 "ffdhe6144", 00:05:18.446 "ffdhe8192" 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "bdev_nvme_set_hotplug", 00:05:18.446 "params": { 00:05:18.446 "period_us": 100000, 00:05:18.446 "enable": false 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "bdev_wait_for_examine" 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "scsi", 00:05:18.446 "config": null 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "scheduler", 
00:05:18.446 "config": [ 00:05:18.446 { 00:05:18.446 "method": "framework_set_scheduler", 00:05:18.446 "params": { 00:05:18.446 "name": "static" 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "vhost_scsi", 00:05:18.446 "config": [] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "vhost_blk", 00:05:18.446 "config": [] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "ublk", 00:05:18.446 "config": [] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "nbd", 00:05:18.446 "config": [] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "nvmf", 00:05:18.446 "config": [ 00:05:18.446 { 00:05:18.446 "method": "nvmf_set_config", 00:05:18.446 "params": { 00:05:18.446 "discovery_filter": "match_any", 00:05:18.446 "admin_cmd_passthru": { 00:05:18.446 "identify_ctrlr": false 00:05:18.446 }, 00:05:18.446 "dhchap_digests": [ 00:05:18.446 "sha256", 00:05:18.446 "sha384", 00:05:18.446 "sha512" 00:05:18.446 ], 00:05:18.446 "dhchap_dhgroups": [ 00:05:18.446 "null", 00:05:18.446 "ffdhe2048", 00:05:18.446 "ffdhe3072", 00:05:18.446 "ffdhe4096", 00:05:18.446 "ffdhe6144", 00:05:18.446 "ffdhe8192" 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "nvmf_set_max_subsystems", 00:05:18.446 "params": { 00:05:18.446 "max_subsystems": 1024 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "nvmf_set_crdt", 00:05:18.446 "params": { 00:05:18.446 "crdt1": 0, 00:05:18.446 "crdt2": 0, 00:05:18.446 "crdt3": 0 00:05:18.446 } 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "method": "nvmf_create_transport", 00:05:18.446 "params": { 00:05:18.446 "trtype": "TCP", 00:05:18.446 "max_queue_depth": 128, 00:05:18.446 "max_io_qpairs_per_ctrlr": 127, 00:05:18.446 "in_capsule_data_size": 4096, 00:05:18.446 "max_io_size": 131072, 00:05:18.446 "io_unit_size": 131072, 00:05:18.446 "max_aq_depth": 128, 00:05:18.446 "num_shared_buffers": 511, 00:05:18.446 "buf_cache_size": 4294967295, 
00:05:18.446 "dif_insert_or_strip": false, 00:05:18.446 "zcopy": false, 00:05:18.446 "c2h_success": true, 00:05:18.446 "sock_priority": 0, 00:05:18.446 "abort_timeout_sec": 1, 00:05:18.446 "ack_timeout": 0, 00:05:18.446 "data_wr_pool_size": 0 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 }, 00:05:18.446 { 00:05:18.446 "subsystem": "iscsi", 00:05:18.446 "config": [ 00:05:18.446 { 00:05:18.446 "method": "iscsi_set_options", 00:05:18.446 "params": { 00:05:18.446 "node_base": "iqn.2016-06.io.spdk", 00:05:18.446 "max_sessions": 128, 00:05:18.446 "max_connections_per_session": 2, 00:05:18.446 "max_queue_depth": 64, 00:05:18.446 "default_time2wait": 2, 00:05:18.446 "default_time2retain": 20, 00:05:18.446 "first_burst_length": 8192, 00:05:18.446 "immediate_data": true, 00:05:18.446 "allow_duplicated_isid": false, 00:05:18.446 "error_recovery_level": 0, 00:05:18.446 "nop_timeout": 60, 00:05:18.446 "nop_in_interval": 30, 00:05:18.446 "disable_chap": false, 00:05:18.446 "require_chap": false, 00:05:18.446 "mutual_chap": false, 00:05:18.446 "chap_group": 0, 00:05:18.446 "max_large_datain_per_connection": 64, 00:05:18.446 "max_r2t_per_connection": 4, 00:05:18.446 "pdu_pool_size": 36864, 00:05:18.446 "immediate_data_pool_size": 16384, 00:05:18.446 "data_out_pool_size": 2048 00:05:18.446 } 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 ] 00:05:18.446 } 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69149 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69149 ']' 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69149 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69149 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:18.446 killing process with pid 69149 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69149' 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69149 00:05:18.446 01:49:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69149 00:05:19.015 01:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69178 00:05:19.015 01:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.015 01:49:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69178 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69178 ']' 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69178 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.304 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69178 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.305 killing process with pid 69178 
00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69178' 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69178 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69178 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:24.305 00:05:24.305 real 0m6.992s 00:05:24.305 user 0m6.577s 00:05:24.305 sys 0m0.713s 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.305 ************************************ 00:05:24.305 END TEST skip_rpc_with_json 00:05:24.305 ************************************ 00:05:24.305 01:49:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:24.305 01:49:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.305 01:49:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.305 01:49:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.305 ************************************ 00:05:24.305 START TEST skip_rpc_with_delay 00:05:24.305 ************************************ 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:24.305 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:24.564 [2024-12-07 01:49:29.800112] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:24.564 [2024-12-07 01:49:29.800256] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.564 00:05:24.564 real 0m0.167s 00:05:24.564 user 0m0.086s 00:05:24.564 sys 0m0.079s 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:24.564 01:49:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:24.564 ************************************ 00:05:24.564 END TEST skip_rpc_with_delay 00:05:24.564 ************************************ 00:05:24.564 01:49:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:24.564 01:49:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:24.564 01:49:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:24.564 01:49:29 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.564 01:49:29 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.564 01:49:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.564 ************************************ 00:05:24.564 START TEST exit_on_failed_rpc_init 00:05:24.564 ************************************ 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69294 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69294 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69294 ']' 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.564 01:49:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.822 [2024-12-07 01:49:30.038352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:24.823 [2024-12-07 01:49:30.038487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69294 ] 00:05:24.823 [2024-12-07 01:49:30.184591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.823 [2024-12-07 01:49:30.229361] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.757 01:49:30 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:25.757 01:49:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:25.757 [2024-12-07 01:49:30.958683] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:25.757 [2024-12-07 01:49:30.958830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69306 ] 00:05:25.757 [2024-12-07 01:49:31.105902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.757 [2024-12-07 01:49:31.155422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.757 [2024-12-07 01:49:31.155521] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:25.757 [2024-12-07 01:49:31.155537] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:25.757 [2024-12-07 01:49:31.155550] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69294 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69294 ']' 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69294 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69294 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.016 killing process with pid 69294 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 69294' 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69294 00:05:26.016 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69294 00:05:26.276 00:05:26.276 real 0m1.762s 00:05:26.276 user 0m1.929s 00:05:26.276 sys 0m0.480s 00:05:26.276 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.276 01:49:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:26.276 ************************************ 00:05:26.276 END TEST exit_on_failed_rpc_init 00:05:26.276 ************************************ 00:05:26.534 01:49:31 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:26.534 00:05:26.534 real 0m14.877s 00:05:26.534 user 0m13.859s 00:05:26.534 sys 0m1.905s 00:05:26.534 01:49:31 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.534 01:49:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.534 ************************************ 00:05:26.534 END TEST skip_rpc 00:05:26.534 ************************************ 00:05:26.534 01:49:31 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.534 01:49:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.534 01:49:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.534 01:49:31 -- common/autotest_common.sh@10 -- # set +x 00:05:26.534 ************************************ 00:05:26.534 START TEST rpc_client 00:05:26.534 ************************************ 00:05:26.534 01:49:31 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:26.534 * Looking for test storage... 
00:05:26.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:26.534 01:49:31 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:26.534 01:49:31 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:26.534 01:49:31 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.793 01:49:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:26.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.793 --rc genhtml_branch_coverage=1 00:05:26.793 --rc genhtml_function_coverage=1 00:05:26.793 --rc genhtml_legend=1 00:05:26.793 --rc geninfo_all_blocks=1 00:05:26.793 --rc geninfo_unexecuted_blocks=1 00:05:26.793 00:05:26.793 ' 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:26.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.793 --rc genhtml_branch_coverage=1 00:05:26.793 --rc genhtml_function_coverage=1 00:05:26.793 --rc genhtml_legend=1 00:05:26.793 --rc geninfo_all_blocks=1 00:05:26.793 --rc geninfo_unexecuted_blocks=1 00:05:26.793 00:05:26.793 ' 00:05:26.793 01:49:32 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:26.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.793 --rc genhtml_branch_coverage=1 00:05:26.793 --rc genhtml_function_coverage=1 00:05:26.793 --rc genhtml_legend=1 00:05:26.793 --rc geninfo_all_blocks=1 00:05:26.793 --rc geninfo_unexecuted_blocks=1 00:05:26.793 00:05:26.793 ' 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:26.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.793 --rc genhtml_branch_coverage=1 00:05:26.793 --rc genhtml_function_coverage=1 00:05:26.793 --rc genhtml_legend=1 00:05:26.793 --rc geninfo_all_blocks=1 00:05:26.793 --rc geninfo_unexecuted_blocks=1 00:05:26.793 00:05:26.793 ' 00:05:26.793 01:49:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:26.793 OK 00:05:26.793 01:49:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:26.793 00:05:26.793 real 0m0.275s 00:05:26.793 user 0m0.136s 00:05:26.793 sys 0m0.152s 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.793 01:49:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:26.793 ************************************ 00:05:26.793 END TEST rpc_client 00:05:26.793 ************************************ 00:05:26.793 01:49:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:26.793 01:49:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.793 01:49:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.793 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:26.793 ************************************ 00:05:26.793 START TEST json_config 00:05:26.793 ************************************ 00:05:26.793 01:49:32 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.053 01:49:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.053 01:49:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.053 01:49:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.053 01:49:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.053 01:49:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.053 01:49:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:27.053 01:49:32 json_config -- scripts/common.sh@345 -- # : 1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.053 01:49:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.053 01:49:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@353 -- # local d=1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.053 01:49:32 json_config -- scripts/common.sh@355 -- # echo 1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.053 01:49:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@353 -- # local d=2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.053 01:49:32 json_config -- scripts/common.sh@355 -- # echo 2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.053 01:49:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.053 01:49:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.053 01:49:32 json_config -- scripts/common.sh@368 -- # return 0 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.053 --rc genhtml_branch_coverage=1 00:05:27.053 --rc genhtml_function_coverage=1 00:05:27.053 --rc genhtml_legend=1 00:05:27.053 --rc geninfo_all_blocks=1 00:05:27.053 --rc geninfo_unexecuted_blocks=1 00:05:27.053 00:05:27.053 ' 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.053 --rc genhtml_branch_coverage=1 00:05:27.053 --rc genhtml_function_coverage=1 00:05:27.053 --rc genhtml_legend=1 00:05:27.053 --rc geninfo_all_blocks=1 00:05:27.053 --rc geninfo_unexecuted_blocks=1 00:05:27.053 00:05:27.053 ' 00:05:27.053 01:49:32 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.053 --rc genhtml_branch_coverage=1 00:05:27.053 --rc genhtml_function_coverage=1 00:05:27.053 --rc genhtml_legend=1 00:05:27.053 --rc geninfo_all_blocks=1 00:05:27.053 --rc geninfo_unexecuted_blocks=1 00:05:27.053 00:05:27.053 ' 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.053 --rc genhtml_branch_coverage=1 00:05:27.053 --rc genhtml_function_coverage=1 00:05:27.053 --rc genhtml_legend=1 00:05:27.053 --rc geninfo_all_blocks=1 00:05:27.053 --rc geninfo_unexecuted_blocks=1 00:05:27.053 00:05:27.053 ' 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6385b0b-8611-4302-b9e4-4a678e16f84f 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f6385b0b-8611-4302-b9e4-4a678e16f84f 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.053 01:49:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.053 01:49:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.053 01:49:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.053 01:49:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.053 01:49:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.053 01:49:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.053 01:49:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.053 01:49:32 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.053 01:49:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@51 -- # : 0 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.053 01:49:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.053 WARNING: No tests are enabled so not running JSON configuration tests 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:27.053 01:49:32 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:27.053 00:05:27.053 real 0m0.231s 00:05:27.053 user 0m0.140s 00:05:27.053 sys 0m0.100s 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.053 01:49:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.053 ************************************ 00:05:27.053 END TEST json_config 00:05:27.053 ************************************ 00:05:27.053 01:49:32 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:27.053 01:49:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.053 01:49:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.053 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:05:27.053 ************************************ 00:05:27.054 START TEST json_config_extra_key 00:05:27.054 ************************************ 00:05:27.054 01:49:32 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:27.313 01:49:32 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.313 01:49:32 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:27.313 01:49:32 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.313 01:49:32 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.313 01:49:32 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:27.313 01:49:32 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.313 01:49:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.313 --rc genhtml_branch_coverage=1 00:05:27.313 --rc genhtml_function_coverage=1 00:05:27.313 --rc genhtml_legend=1 00:05:27.313 --rc geninfo_all_blocks=1 00:05:27.313 --rc geninfo_unexecuted_blocks=1 00:05:27.313 00:05:27.313 ' 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.314 --rc genhtml_branch_coverage=1 00:05:27.314 --rc genhtml_function_coverage=1 00:05:27.314 --rc 
genhtml_legend=1 00:05:27.314 --rc geninfo_all_blocks=1 00:05:27.314 --rc geninfo_unexecuted_blocks=1 00:05:27.314 00:05:27.314 ' 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.314 --rc genhtml_branch_coverage=1 00:05:27.314 --rc genhtml_function_coverage=1 00:05:27.314 --rc genhtml_legend=1 00:05:27.314 --rc geninfo_all_blocks=1 00:05:27.314 --rc geninfo_unexecuted_blocks=1 00:05:27.314 00:05:27.314 ' 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.314 --rc genhtml_branch_coverage=1 00:05:27.314 --rc genhtml_function_coverage=1 00:05:27.314 --rc genhtml_legend=1 00:05:27.314 --rc geninfo_all_blocks=1 00:05:27.314 --rc geninfo_unexecuted_blocks=1 00:05:27.314 00:05:27.314 ' 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f6385b0b-8611-4302-b9e4-4a678e16f84f 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f6385b0b-8611-4302-b9e4-4a678e16f84f 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.314 01:49:32 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.314 01:49:32 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.314 01:49:32 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.314 01:49:32 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.314 01:49:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.314 01:49:32 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.314 01:49:32 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.314 01:49:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:27.314 01:49:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.314 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.314 01:49:32 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:27.314 INFO: launching applications... 00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:27.314 01:49:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69490 00:05:27.314 Waiting for target to run... 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69490 /var/tmp/spdk_tgt.sock 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69490 ']' 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:27.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:27.314 01:49:32 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:27.314 01:49:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:27.314 [2024-12-07 01:49:32.771936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:27.314 [2024-12-07 01:49:32.772082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69490 ] 00:05:27.883 [2024-12-07 01:49:33.128489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.883 [2024-12-07 01:49:33.160282] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.453 01:49:33 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:28.453 01:49:33 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:28.453 00:05:28.453 INFO: shutting down applications... 00:05:28.453 01:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:28.453 01:49:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69490 ]] 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69490 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69490 00:05:28.453 01:49:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.712 01:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.712 01:49:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.712 01:49:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69490 00:05:28.712 01:49:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.712 01:49:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.713 01:49:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.713 SPDK target shutdown done 00:05:28.713 01:49:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.713 Success 00:05:28.713 01:49:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.713 ************************************ 00:05:28.713 END TEST json_config_extra_key 00:05:28.713 ************************************ 00:05:28.713 00:05:28.713 real 0m1.649s 00:05:28.713 user 0m1.396s 00:05:28.713 sys 0m0.464s 00:05:28.713 01:49:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:05:28.713 01:49:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.713 01:49:34 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.713 01:49:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.713 01:49:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.713 01:49:34 -- common/autotest_common.sh@10 -- # set +x 00:05:28.972 ************************************ 00:05:28.972 START TEST alias_rpc 00:05:28.972 ************************************ 00:05:28.972 01:49:34 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.972 * Looking for test storage... 00:05:28.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:28.972 01:49:34 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.972 01:49:34 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:28.972 01:49:34 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.972 01:49:34 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.972 01:49:34 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.972 01:49:34 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.973 01:49:34 alias_rpc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.973 01:49:34 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.973 --rc genhtml_branch_coverage=1 00:05:28.973 --rc genhtml_function_coverage=1 00:05:28.973 --rc genhtml_legend=1 00:05:28.973 --rc geninfo_all_blocks=1 00:05:28.973 --rc geninfo_unexecuted_blocks=1 00:05:28.973 00:05:28.973 ' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.973 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.973 --rc genhtml_branch_coverage=1 00:05:28.973 --rc genhtml_function_coverage=1 00:05:28.973 --rc genhtml_legend=1 00:05:28.973 --rc geninfo_all_blocks=1 00:05:28.973 --rc geninfo_unexecuted_blocks=1 00:05:28.973 00:05:28.973 ' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.973 --rc genhtml_branch_coverage=1 00:05:28.973 --rc genhtml_function_coverage=1 00:05:28.973 --rc genhtml_legend=1 00:05:28.973 --rc geninfo_all_blocks=1 00:05:28.973 --rc geninfo_unexecuted_blocks=1 00:05:28.973 00:05:28.973 ' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.973 --rc genhtml_branch_coverage=1 00:05:28.973 --rc genhtml_function_coverage=1 00:05:28.973 --rc genhtml_legend=1 00:05:28.973 --rc geninfo_all_blocks=1 00:05:28.973 --rc geninfo_unexecuted_blocks=1 00:05:28.973 00:05:28.973 ' 00:05:28.973 01:49:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.973 01:49:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69569 00:05:28.973 01:49:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69569 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69569 ']' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.973 01:49:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
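The lcov version gate traced above (scripts/common.sh `cmp_versions`) splits each version string on `.`, `-`, and `:` via `IFS` into arrays and compares the fields numerically one by one, padding the shorter version with zeros. A minimal self-contained sketch of that comparison; the function name `ver_lt` is mine, not the script's:

```bash
#!/usr/bin/env bash
# Sketch of field-wise version comparison: split on . - : and compare
# numerically per field, treating missing trailing fields as 0.
ver_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Field-wise numeric comparison is what makes `1.15 < 2` come out true, where a plain string comparison would get it wrong.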
00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.973 01:49:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.233 [2024-12-07 01:49:34.492627] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:29.233 [2024-12-07 01:49:34.492775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69569 ] 00:05:29.233 [2024-12-07 01:49:34.618322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.233 [2024-12-07 01:49:34.668639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:30.177 01:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:30.177 01:49:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69569 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69569 ']' 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69569 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69569 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:30.177 killing process 
with pid 69569 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69569' 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@969 -- # kill 69569 00:05:30.177 01:49:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 69569 00:05:30.748 ************************************ 00:05:30.748 END TEST alias_rpc 00:05:30.748 ************************************ 00:05:30.748 00:05:30.748 real 0m1.789s 00:05:30.748 user 0m1.851s 00:05:30.748 sys 0m0.477s 00:05:30.748 01:49:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:30.748 01:49:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.748 01:49:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:30.748 01:49:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:30.748 01:49:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.748 01:49:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.748 01:49:36 -- common/autotest_common.sh@10 -- # set +x 00:05:30.748 ************************************ 00:05:30.748 START TEST spdkcli_tcp 00:05:30.748 ************************************ 00:05:30.748 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:30.748 * Looking for test storage... 
00:05:30.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:30.748 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:30.748 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:30.748 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:31.008 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.009 01:49:36 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.009 --rc genhtml_branch_coverage=1 00:05:31.009 --rc genhtml_function_coverage=1 00:05:31.009 --rc genhtml_legend=1 00:05:31.009 --rc geninfo_all_blocks=1 00:05:31.009 --rc geninfo_unexecuted_blocks=1 00:05:31.009 00:05:31.009 ' 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.009 --rc genhtml_branch_coverage=1 00:05:31.009 --rc genhtml_function_coverage=1 00:05:31.009 --rc genhtml_legend=1 00:05:31.009 --rc geninfo_all_blocks=1 00:05:31.009 --rc geninfo_unexecuted_blocks=1 00:05:31.009 00:05:31.009 ' 00:05:31.009 01:49:36 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.009 --rc genhtml_branch_coverage=1 00:05:31.009 --rc genhtml_function_coverage=1 00:05:31.009 --rc genhtml_legend=1 00:05:31.009 --rc geninfo_all_blocks=1 00:05:31.009 --rc geninfo_unexecuted_blocks=1 00:05:31.009 00:05:31.009 ' 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:31.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.009 --rc genhtml_branch_coverage=1 00:05:31.009 --rc genhtml_function_coverage=1 00:05:31.009 --rc genhtml_legend=1 00:05:31.009 --rc geninfo_all_blocks=1 00:05:31.009 --rc geninfo_unexecuted_blocks=1 00:05:31.009 00:05:31.009 ' 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69654 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:31.009 01:49:36 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69654 00:05:31.009 01:49:36 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 69654 ']' 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.009 01:49:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:31.009 [2024-12-07 01:49:36.347378] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:31.009 [2024-12-07 01:49:36.347529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69654 ] 00:05:31.269 [2024-12-07 01:49:36.495843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.269 [2024-12-07 01:49:36.548111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.269 [2024-12-07 01:49:36.548230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.838 01:49:37 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:31.838 01:49:37 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:31.838 01:49:37 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:31.838 01:49:37 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69670 00:05:31.838 01:49:37 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:32.098 [ 00:05:32.098 "bdev_malloc_delete", 
00:05:32.098 "bdev_malloc_create", 00:05:32.098 "bdev_null_resize", 00:05:32.098 "bdev_null_delete", 00:05:32.098 "bdev_null_create", 00:05:32.098 "bdev_nvme_cuse_unregister", 00:05:32.098 "bdev_nvme_cuse_register", 00:05:32.098 "bdev_opal_new_user", 00:05:32.098 "bdev_opal_set_lock_state", 00:05:32.098 "bdev_opal_delete", 00:05:32.098 "bdev_opal_get_info", 00:05:32.098 "bdev_opal_create", 00:05:32.098 "bdev_nvme_opal_revert", 00:05:32.098 "bdev_nvme_opal_init", 00:05:32.098 "bdev_nvme_send_cmd", 00:05:32.098 "bdev_nvme_set_keys", 00:05:32.098 "bdev_nvme_get_path_iostat", 00:05:32.098 "bdev_nvme_get_mdns_discovery_info", 00:05:32.098 "bdev_nvme_stop_mdns_discovery", 00:05:32.098 "bdev_nvme_start_mdns_discovery", 00:05:32.098 "bdev_nvme_set_multipath_policy", 00:05:32.098 "bdev_nvme_set_preferred_path", 00:05:32.098 "bdev_nvme_get_io_paths", 00:05:32.098 "bdev_nvme_remove_error_injection", 00:05:32.098 "bdev_nvme_add_error_injection", 00:05:32.098 "bdev_nvme_get_discovery_info", 00:05:32.098 "bdev_nvme_stop_discovery", 00:05:32.098 "bdev_nvme_start_discovery", 00:05:32.098 "bdev_nvme_get_controller_health_info", 00:05:32.098 "bdev_nvme_disable_controller", 00:05:32.098 "bdev_nvme_enable_controller", 00:05:32.098 "bdev_nvme_reset_controller", 00:05:32.098 "bdev_nvme_get_transport_statistics", 00:05:32.098 "bdev_nvme_apply_firmware", 00:05:32.098 "bdev_nvme_detach_controller", 00:05:32.098 "bdev_nvme_get_controllers", 00:05:32.098 "bdev_nvme_attach_controller", 00:05:32.098 "bdev_nvme_set_hotplug", 00:05:32.098 "bdev_nvme_set_options", 00:05:32.098 "bdev_passthru_delete", 00:05:32.098 "bdev_passthru_create", 00:05:32.098 "bdev_lvol_set_parent_bdev", 00:05:32.098 "bdev_lvol_set_parent", 00:05:32.098 "bdev_lvol_check_shallow_copy", 00:05:32.098 "bdev_lvol_start_shallow_copy", 00:05:32.098 "bdev_lvol_grow_lvstore", 00:05:32.098 "bdev_lvol_get_lvols", 00:05:32.098 "bdev_lvol_get_lvstores", 00:05:32.098 "bdev_lvol_delete", 00:05:32.098 "bdev_lvol_set_read_only", 
00:05:32.098 "bdev_lvol_resize", 00:05:32.098 "bdev_lvol_decouple_parent", 00:05:32.098 "bdev_lvol_inflate", 00:05:32.098 "bdev_lvol_rename", 00:05:32.098 "bdev_lvol_clone_bdev", 00:05:32.098 "bdev_lvol_clone", 00:05:32.098 "bdev_lvol_snapshot", 00:05:32.098 "bdev_lvol_create", 00:05:32.098 "bdev_lvol_delete_lvstore", 00:05:32.098 "bdev_lvol_rename_lvstore", 00:05:32.098 "bdev_lvol_create_lvstore", 00:05:32.098 "bdev_raid_set_options", 00:05:32.098 "bdev_raid_remove_base_bdev", 00:05:32.098 "bdev_raid_add_base_bdev", 00:05:32.098 "bdev_raid_delete", 00:05:32.098 "bdev_raid_create", 00:05:32.098 "bdev_raid_get_bdevs", 00:05:32.098 "bdev_error_inject_error", 00:05:32.098 "bdev_error_delete", 00:05:32.098 "bdev_error_create", 00:05:32.098 "bdev_split_delete", 00:05:32.098 "bdev_split_create", 00:05:32.098 "bdev_delay_delete", 00:05:32.098 "bdev_delay_create", 00:05:32.098 "bdev_delay_update_latency", 00:05:32.098 "bdev_zone_block_delete", 00:05:32.098 "bdev_zone_block_create", 00:05:32.098 "blobfs_create", 00:05:32.098 "blobfs_detect", 00:05:32.098 "blobfs_set_cache_size", 00:05:32.098 "bdev_aio_delete", 00:05:32.098 "bdev_aio_rescan", 00:05:32.098 "bdev_aio_create", 00:05:32.098 "bdev_ftl_set_property", 00:05:32.098 "bdev_ftl_get_properties", 00:05:32.098 "bdev_ftl_get_stats", 00:05:32.098 "bdev_ftl_unmap", 00:05:32.098 "bdev_ftl_unload", 00:05:32.098 "bdev_ftl_delete", 00:05:32.098 "bdev_ftl_load", 00:05:32.098 "bdev_ftl_create", 00:05:32.098 "bdev_virtio_attach_controller", 00:05:32.098 "bdev_virtio_scsi_get_devices", 00:05:32.098 "bdev_virtio_detach_controller", 00:05:32.098 "bdev_virtio_blk_set_hotplug", 00:05:32.098 "bdev_iscsi_delete", 00:05:32.098 "bdev_iscsi_create", 00:05:32.098 "bdev_iscsi_set_options", 00:05:32.098 "accel_error_inject_error", 00:05:32.098 "ioat_scan_accel_module", 00:05:32.098 "dsa_scan_accel_module", 00:05:32.098 "iaa_scan_accel_module", 00:05:32.098 "keyring_file_remove_key", 00:05:32.098 "keyring_file_add_key", 00:05:32.098 
"keyring_linux_set_options", 00:05:32.098 "fsdev_aio_delete", 00:05:32.098 "fsdev_aio_create", 00:05:32.098 "iscsi_get_histogram", 00:05:32.098 "iscsi_enable_histogram", 00:05:32.098 "iscsi_set_options", 00:05:32.098 "iscsi_get_auth_groups", 00:05:32.098 "iscsi_auth_group_remove_secret", 00:05:32.099 "iscsi_auth_group_add_secret", 00:05:32.099 "iscsi_delete_auth_group", 00:05:32.099 "iscsi_create_auth_group", 00:05:32.099 "iscsi_set_discovery_auth", 00:05:32.099 "iscsi_get_options", 00:05:32.099 "iscsi_target_node_request_logout", 00:05:32.099 "iscsi_target_node_set_redirect", 00:05:32.099 "iscsi_target_node_set_auth", 00:05:32.099 "iscsi_target_node_add_lun", 00:05:32.099 "iscsi_get_stats", 00:05:32.099 "iscsi_get_connections", 00:05:32.099 "iscsi_portal_group_set_auth", 00:05:32.099 "iscsi_start_portal_group", 00:05:32.099 "iscsi_delete_portal_group", 00:05:32.099 "iscsi_create_portal_group", 00:05:32.099 "iscsi_get_portal_groups", 00:05:32.099 "iscsi_delete_target_node", 00:05:32.099 "iscsi_target_node_remove_pg_ig_maps", 00:05:32.099 "iscsi_target_node_add_pg_ig_maps", 00:05:32.099 "iscsi_create_target_node", 00:05:32.099 "iscsi_get_target_nodes", 00:05:32.099 "iscsi_delete_initiator_group", 00:05:32.099 "iscsi_initiator_group_remove_initiators", 00:05:32.099 "iscsi_initiator_group_add_initiators", 00:05:32.099 "iscsi_create_initiator_group", 00:05:32.099 "iscsi_get_initiator_groups", 00:05:32.099 "nvmf_set_crdt", 00:05:32.099 "nvmf_set_config", 00:05:32.099 "nvmf_set_max_subsystems", 00:05:32.099 "nvmf_stop_mdns_prr", 00:05:32.099 "nvmf_publish_mdns_prr", 00:05:32.099 "nvmf_subsystem_get_listeners", 00:05:32.099 "nvmf_subsystem_get_qpairs", 00:05:32.099 "nvmf_subsystem_get_controllers", 00:05:32.099 "nvmf_get_stats", 00:05:32.099 "nvmf_get_transports", 00:05:32.099 "nvmf_create_transport", 00:05:32.099 "nvmf_get_targets", 00:05:32.099 "nvmf_delete_target", 00:05:32.099 "nvmf_create_target", 00:05:32.099 "nvmf_subsystem_allow_any_host", 00:05:32.099 
"nvmf_subsystem_set_keys", 00:05:32.099 "nvmf_subsystem_remove_host", 00:05:32.099 "nvmf_subsystem_add_host", 00:05:32.099 "nvmf_ns_remove_host", 00:05:32.099 "nvmf_ns_add_host", 00:05:32.099 "nvmf_subsystem_remove_ns", 00:05:32.099 "nvmf_subsystem_set_ns_ana_group", 00:05:32.099 "nvmf_subsystem_add_ns", 00:05:32.099 "nvmf_subsystem_listener_set_ana_state", 00:05:32.099 "nvmf_discovery_get_referrals", 00:05:32.099 "nvmf_discovery_remove_referral", 00:05:32.099 "nvmf_discovery_add_referral", 00:05:32.099 "nvmf_subsystem_remove_listener", 00:05:32.099 "nvmf_subsystem_add_listener", 00:05:32.099 "nvmf_delete_subsystem", 00:05:32.099 "nvmf_create_subsystem", 00:05:32.099 "nvmf_get_subsystems", 00:05:32.099 "env_dpdk_get_mem_stats", 00:05:32.099 "nbd_get_disks", 00:05:32.099 "nbd_stop_disk", 00:05:32.099 "nbd_start_disk", 00:05:32.099 "ublk_recover_disk", 00:05:32.099 "ublk_get_disks", 00:05:32.099 "ublk_stop_disk", 00:05:32.099 "ublk_start_disk", 00:05:32.099 "ublk_destroy_target", 00:05:32.099 "ublk_create_target", 00:05:32.099 "virtio_blk_create_transport", 00:05:32.099 "virtio_blk_get_transports", 00:05:32.099 "vhost_controller_set_coalescing", 00:05:32.099 "vhost_get_controllers", 00:05:32.099 "vhost_delete_controller", 00:05:32.099 "vhost_create_blk_controller", 00:05:32.099 "vhost_scsi_controller_remove_target", 00:05:32.099 "vhost_scsi_controller_add_target", 00:05:32.099 "vhost_start_scsi_controller", 00:05:32.099 "vhost_create_scsi_controller", 00:05:32.099 "thread_set_cpumask", 00:05:32.099 "scheduler_set_options", 00:05:32.099 "framework_get_governor", 00:05:32.099 "framework_get_scheduler", 00:05:32.099 "framework_set_scheduler", 00:05:32.099 "framework_get_reactors", 00:05:32.099 "thread_get_io_channels", 00:05:32.099 "thread_get_pollers", 00:05:32.099 "thread_get_stats", 00:05:32.099 "framework_monitor_context_switch", 00:05:32.099 "spdk_kill_instance", 00:05:32.099 "log_enable_timestamps", 00:05:32.099 "log_get_flags", 00:05:32.099 "log_clear_flag", 
00:05:32.099 "log_set_flag", 00:05:32.099 "log_get_level", 00:05:32.099 "log_set_level", 00:05:32.099 "log_get_print_level", 00:05:32.099 "log_set_print_level", 00:05:32.099 "framework_enable_cpumask_locks", 00:05:32.099 "framework_disable_cpumask_locks", 00:05:32.099 "framework_wait_init", 00:05:32.099 "framework_start_init", 00:05:32.099 "scsi_get_devices", 00:05:32.099 "bdev_get_histogram", 00:05:32.099 "bdev_enable_histogram", 00:05:32.099 "bdev_set_qos_limit", 00:05:32.099 "bdev_set_qd_sampling_period", 00:05:32.099 "bdev_get_bdevs", 00:05:32.099 "bdev_reset_iostat", 00:05:32.099 "bdev_get_iostat", 00:05:32.099 "bdev_examine", 00:05:32.099 "bdev_wait_for_examine", 00:05:32.099 "bdev_set_options", 00:05:32.099 "accel_get_stats", 00:05:32.099 "accel_set_options", 00:05:32.099 "accel_set_driver", 00:05:32.099 "accel_crypto_key_destroy", 00:05:32.099 "accel_crypto_keys_get", 00:05:32.099 "accel_crypto_key_create", 00:05:32.099 "accel_assign_opc", 00:05:32.099 "accel_get_module_info", 00:05:32.099 "accel_get_opc_assignments", 00:05:32.099 "vmd_rescan", 00:05:32.099 "vmd_remove_device", 00:05:32.099 "vmd_enable", 00:05:32.099 "sock_get_default_impl", 00:05:32.099 "sock_set_default_impl", 00:05:32.099 "sock_impl_set_options", 00:05:32.099 "sock_impl_get_options", 00:05:32.099 "iobuf_get_stats", 00:05:32.099 "iobuf_set_options", 00:05:32.099 "keyring_get_keys", 00:05:32.099 "framework_get_pci_devices", 00:05:32.099 "framework_get_config", 00:05:32.099 "framework_get_subsystems", 00:05:32.099 "fsdev_set_opts", 00:05:32.099 "fsdev_get_opts", 00:05:32.099 "trace_get_info", 00:05:32.099 "trace_get_tpoint_group_mask", 00:05:32.099 "trace_disable_tpoint_group", 00:05:32.099 "trace_enable_tpoint_group", 00:05:32.099 "trace_clear_tpoint_mask", 00:05:32.099 "trace_set_tpoint_mask", 00:05:32.099 "notify_get_notifications", 00:05:32.099 "notify_get_types", 00:05:32.099 "spdk_get_version", 00:05:32.099 "rpc_get_methods" 00:05:32.099 ] 00:05:32.099 01:49:37 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.099 01:49:37 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:32.099 01:49:37 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69654 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69654 ']' 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69654 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69654 00:05:32.099 killing process with pid 69654 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69654' 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69654 00:05:32.099 01:49:37 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69654 00:05:32.670 ************************************ 00:05:32.670 END TEST spdkcli_tcp 00:05:32.670 ************************************ 00:05:32.670 00:05:32.670 real 0m1.859s 00:05:32.670 user 0m3.119s 00:05:32.670 sys 0m0.565s 00:05:32.670 01:49:37 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.670 01:49:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 01:49:37 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.670 01:49:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.670 01:49:37 -- 
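The `killprocess 69654` sequence above probes the pid with `kill -0`, confirms the process name with `ps -o comm=`, then kills and reaps it with `wait`. A standalone re-sketch of that probe-kill-wait pattern (not the autotest helper itself, just the shape it traces):

```shell
# Probe-then-kill pattern as traced by killprocess in autotest_common.sh.
# kill -0 sends no signal; it only tests that the pid exists and is signalable.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps --no-headers -o comm= "$pid")    # process name, e.g. "sleep"
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true            # reap; status reflects the signal
fi
kill -0 "$pid" 2>/dev/null || echo "pid $pid is gone"
```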
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.670 01:49:37 -- common/autotest_common.sh@10 -- # set +x 00:05:32.670 ************************************ 00:05:32.670 START TEST dpdk_mem_utility 00:05:32.670 ************************************ 00:05:32.670 01:49:37 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:32.670 * Looking for test storage... 00:05:32.670 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:32.670 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.670 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.670 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:32.931 
01:49:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.931 01:49:38 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.931 --rc genhtml_branch_coverage=1 00:05:32.931 --rc genhtml_function_coverage=1 00:05:32.931 --rc genhtml_legend=1 00:05:32.931 --rc geninfo_all_blocks=1 00:05:32.931 --rc geninfo_unexecuted_blocks=1 00:05:32.931 00:05:32.931 ' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.931 --rc 
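The `cmp_versions` trace above (from `scripts/common.sh`, invoked as `lt 1.15 2`) splits each version string on `.`, `-` and `:` via a local `IFS`, then compares the fields numerically, padding the shorter version with zeros. A minimal re-sketch of that comparison, written from the traced logic rather than the script source:

```shell
# Re-sketch of the lt/cmp_versions logic traced in the log above:
# split on '.', '-' or ':' and compare field by field; return 0 iff $1 < $2.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
        if (( d1 < d2 )); then return 0; fi
        if (( d1 > d2 )); then return 1; fi
    done
    return 1    # equal versions are not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"    # the exact comparison made in the log
```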
genhtml_branch_coverage=1 00:05:32.931 --rc genhtml_function_coverage=1 00:05:32.931 --rc genhtml_legend=1 00:05:32.931 --rc geninfo_all_blocks=1 00:05:32.931 --rc geninfo_unexecuted_blocks=1 00:05:32.931 00:05:32.931 ' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.931 --rc genhtml_branch_coverage=1 00:05:32.931 --rc genhtml_function_coverage=1 00:05:32.931 --rc genhtml_legend=1 00:05:32.931 --rc geninfo_all_blocks=1 00:05:32.931 --rc geninfo_unexecuted_blocks=1 00:05:32.931 00:05:32.931 ' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.931 --rc genhtml_branch_coverage=1 00:05:32.931 --rc genhtml_function_coverage=1 00:05:32.931 --rc genhtml_legend=1 00:05:32.931 --rc geninfo_all_blocks=1 00:05:32.931 --rc geninfo_unexecuted_blocks=1 00:05:32.931 00:05:32.931 ' 00:05:32.931 01:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:32.931 01:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.931 01:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69748 00:05:32.931 01:49:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69748 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 69748 ']' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.931 01:49:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:32.932 [2024-12-07 01:49:38.292146] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:32.932 [2024-12-07 01:49:38.292311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69748 ] 00:05:33.232 [2024-12-07 01:49:38.439157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.232 [2024-12-07 01:49:38.489701] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.821 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:33.821 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:33.821 01:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:33.821 01:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:33.821 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.821 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:33.821 { 00:05:33.821 "filename": "/tmp/spdk_mem_dump.txt" 00:05:33.821 } 00:05:33.821 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.821 01:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:33.821 DPDK memory size 860.000000 MiB in 1 heap(s) 00:05:33.821 1 heaps 
totaling size 860.000000 MiB 00:05:33.821 size: 860.000000 MiB heap id: 0 00:05:33.821 end heaps---------- 00:05:33.821 9 mempools totaling size 642.649841 MiB 00:05:33.821 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:33.821 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:33.821 size: 92.545471 MiB name: bdev_io_69748 00:05:33.821 size: 51.011292 MiB name: evtpool_69748 00:05:33.821 size: 50.003479 MiB name: msgpool_69748 00:05:33.821 size: 36.509338 MiB name: fsdev_io_69748 00:05:33.821 size: 21.763794 MiB name: PDU_Pool 00:05:33.821 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:33.821 size: 0.026123 MiB name: Session_Pool 00:05:33.821 end mempools------- 00:05:33.821 6 memzones totaling size 4.142822 MiB 00:05:33.821 size: 1.000366 MiB name: RG_ring_0_69748 00:05:33.821 size: 1.000366 MiB name: RG_ring_1_69748 00:05:33.821 size: 1.000366 MiB name: RG_ring_4_69748 00:05:33.821 size: 1.000366 MiB name: RG_ring_5_69748 00:05:33.821 size: 0.125366 MiB name: RG_ring_2_69748 00:05:33.821 size: 0.015991 MiB name: RG_ring_3_69748 00:05:33.821 end memzones------- 00:05:33.821 01:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:33.821 heap id: 0 total size: 860.000000 MiB number of busy elements: 315 number of free elements: 16 00:05:33.821 list of free elements. 
size: 13.935059 MiB 00:05:33.821 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:33.821 element at address: 0x200000800000 with size: 1.996948 MiB 00:05:33.821 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:05:33.821 element at address: 0x20001be00000 with size: 0.999878 MiB 00:05:33.821 element at address: 0x200034a00000 with size: 0.994446 MiB 00:05:33.821 element at address: 0x200009600000 with size: 0.959839 MiB 00:05:33.821 element at address: 0x200015e00000 with size: 0.954285 MiB 00:05:33.821 element at address: 0x20001c000000 with size: 0.936584 MiB 00:05:33.821 element at address: 0x200000200000 with size: 0.835022 MiB 00:05:33.821 element at address: 0x20001d800000 with size: 0.567139 MiB 00:05:33.821 element at address: 0x20000d800000 with size: 0.489258 MiB 00:05:33.821 element at address: 0x200003e00000 with size: 0.487183 MiB 00:05:33.821 element at address: 0x20001c200000 with size: 0.485657 MiB 00:05:33.821 element at address: 0x200007000000 with size: 0.480469 MiB 00:05:33.821 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:05:33.821 element at address: 0x200003a00000 with size: 0.353210 MiB 00:05:33.821 list of standard malloc elements. 
size: 199.268250 MiB 00:05:33.821 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:05:33.821 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:05:33.821 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:05:33.822 element at address: 0x20001befff80 with size: 1.000122 MiB 00:05:33.822 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:05:33.822 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:33.822 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:05:33.822 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:33.822 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:05:33.822 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:33.822 element at 
address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a5eb80 with size: 0.000183 MiB 
00:05:33.822 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003aff880 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d540 with 
size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:05:33.822 element at address: 
0x200003e7ea40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b000 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b180 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b240 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b300 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b480 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b540 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b600 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:05:33.822 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:05:33.822 
element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:05:33.822 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891300 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891480 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891540 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891600 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891780 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891840 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891900 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:05:33.822 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892080 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892140 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892200 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8922c0 with size: 0.000183 
MiB 00:05:33.823 element at address: 0x20001d892380 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892440 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892500 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892680 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892740 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892800 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892980 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893040 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893100 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893280 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893340 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893400 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893580 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893640 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893700 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8937c0 
with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893880 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893940 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894000 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894180 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894240 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894300 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894480 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894540 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894600 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894780 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894840 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894900 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:05:33.823 element at 
address: 0x20001d894cc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d895080 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d895140 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d895200 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d895380 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20001d895440 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 
00:05:33.823 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e280 with 
size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:05:33.823 element at address: 
0x20002ac6f780 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:05:33.823 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:05:33.824 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:05:33.824 list of memzone associated elements. size: 646.796692 MiB 00:05:33.824 element at address: 0x20001d895500 with size: 211.416748 MiB 00:05:33.824 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:33.824 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:05:33.824 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:33.824 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:05:33.824 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69748_0 00:05:33.824 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:33.824 associated memzone info: size: 48.002930 MiB name: MP_evtpool_69748_0 00:05:33.824 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:33.824 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69748_0 00:05:33.824 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:05:33.824 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69748_0 00:05:33.824 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:05:33.824 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:33.824 element at address: 0x200034bfeb40 with size: 
18.005066 MiB 00:05:33.824 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:33.824 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:33.824 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_69748 00:05:33.824 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:33.824 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69748 00:05:33.824 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:33.824 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69748 00:05:33.824 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:05:33.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:33.824 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:05:33.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:33.824 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:05:33.824 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:33.824 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:05:33.824 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:33.824 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:33.824 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69748 00:05:33.824 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:33.824 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69748 00:05:33.824 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:05:33.824 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69748 00:05:33.824 element at address: 0x200034afe940 with size: 1.000488 MiB 00:05:33.824 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69748 00:05:33.824 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:05:33.824 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69748 00:05:33.824 element at address: 0x200003e7eec0 with size: 
0.500488 MiB 00:05:33.824 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69748 00:05:33.824 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:05:33.824 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:33.824 element at address: 0x20000707b780 with size: 0.500488 MiB 00:05:33.824 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:33.824 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:05:33.824 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:33.824 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:05:33.824 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69748 00:05:33.824 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:05:33.824 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:33.824 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:05:33.824 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:33.824 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:05:33.824 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69748 00:05:33.824 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:05:33.824 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:33.824 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:33.824 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69748 00:05:33.824 element at address: 0x200003aff940 with size: 0.000305 MiB 00:05:33.824 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69748 00:05:33.824 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:05:33.824 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69748 00:05:33.824 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:05:33.824 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:33.824 01:49:39 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:33.824 01:49:39 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69748 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 69748 ']' 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 69748 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69748 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.824 killing process with pid 69748 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69748' 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 69748 00:05:33.824 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 69748 00:05:34.396 00:05:34.396 real 0m1.707s 00:05:34.396 user 0m1.661s 00:05:34.396 sys 0m0.508s 00:05:34.396 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.396 01:49:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:34.396 ************************************ 00:05:34.396 END TEST dpdk_mem_utility 00:05:34.396 ************************************ 00:05:34.396 01:49:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:34.396 01:49:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.396 01:49:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.396 01:49:39 -- common/autotest_common.sh@10 -- # set +x 00:05:34.396 ************************************ 
00:05:34.396 START TEST event 00:05:34.396 ************************************ 00:05:34.396 01:49:39 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:34.396 * Looking for test storage... 00:05:34.396 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:34.396 01:49:39 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:34.396 01:49:39 event -- common/autotest_common.sh@1681 -- # lcov --version 00:05:34.396 01:49:39 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:34.656 01:49:39 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:34.656 01:49:39 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.656 01:49:39 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.656 01:49:39 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.656 01:49:39 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.656 01:49:39 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.656 01:49:39 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.656 01:49:39 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.656 01:49:39 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.656 01:49:39 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.656 01:49:39 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.656 01:49:39 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.656 01:49:39 event -- scripts/common.sh@344 -- # case "$op" in 00:05:34.656 01:49:39 event -- scripts/common.sh@345 -- # : 1 00:05:34.656 01:49:39 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.656 01:49:39 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.656 01:49:39 event -- scripts/common.sh@365 -- # decimal 1 00:05:34.656 01:49:39 event -- scripts/common.sh@353 -- # local d=1 00:05:34.656 01:49:39 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.656 01:49:39 event -- scripts/common.sh@355 -- # echo 1 00:05:34.656 01:49:39 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.656 01:49:39 event -- scripts/common.sh@366 -- # decimal 2 00:05:34.656 01:49:39 event -- scripts/common.sh@353 -- # local d=2 00:05:34.656 01:49:39 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.656 01:49:39 event -- scripts/common.sh@355 -- # echo 2 00:05:34.656 01:49:39 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.656 01:49:39 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.656 01:49:39 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.656 01:49:39 event -- scripts/common.sh@368 -- # return 0 00:05:34.656 01:49:39 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.656 01:49:39 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:34.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.656 --rc genhtml_branch_coverage=1 00:05:34.656 --rc genhtml_function_coverage=1 00:05:34.656 --rc genhtml_legend=1 00:05:34.656 --rc geninfo_all_blocks=1 00:05:34.656 --rc geninfo_unexecuted_blocks=1 00:05:34.656 00:05:34.656 ' 00:05:34.656 01:49:39 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:34.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.656 --rc genhtml_branch_coverage=1 00:05:34.656 --rc genhtml_function_coverage=1 00:05:34.656 --rc genhtml_legend=1 00:05:34.656 --rc geninfo_all_blocks=1 00:05:34.656 --rc geninfo_unexecuted_blocks=1 00:05:34.656 00:05:34.656 ' 00:05:34.656 01:49:39 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:34.656 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:34.656 --rc genhtml_branch_coverage=1 00:05:34.656 --rc genhtml_function_coverage=1 00:05:34.656 --rc genhtml_legend=1 00:05:34.656 --rc geninfo_all_blocks=1 00:05:34.656 --rc geninfo_unexecuted_blocks=1 00:05:34.656 00:05:34.656 ' 00:05:34.657 01:49:39 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:34.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.657 --rc genhtml_branch_coverage=1 00:05:34.657 --rc genhtml_function_coverage=1 00:05:34.657 --rc genhtml_legend=1 00:05:34.657 --rc geninfo_all_blocks=1 00:05:34.657 --rc geninfo_unexecuted_blocks=1 00:05:34.657 00:05:34.657 ' 00:05:34.657 01:49:39 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:34.657 01:49:39 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:34.657 01:49:39 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.657 01:49:39 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:34.657 01:49:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.657 01:49:39 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.657 ************************************ 00:05:34.657 START TEST event_perf 00:05:34.657 ************************************ 00:05:34.657 01:49:39 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:34.657 Running I/O for 1 seconds...[2024-12-07 01:49:39.988509] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:34.657 [2024-12-07 01:49:39.988646] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69829 ] 00:05:34.916 [2024-12-07 01:49:40.125609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:34.916 [2024-12-07 01:49:40.179427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.916 [2024-12-07 01:49:40.179700] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.916 [2024-12-07 01:49:40.179777] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.916 Running I/O for 1 seconds...[2024-12-07 01:49:40.179866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.856 00:05:35.856 lcore 0: 194702 00:05:35.856 lcore 1: 194703 00:05:35.856 lcore 2: 194704 00:05:35.856 lcore 3: 194704 00:05:35.856 done. 
00:05:35.856 00:05:35.856 real 0m1.325s 00:05:35.856 user 0m4.112s 00:05:35.856 sys 0m0.092s 00:05:35.856 01:49:41 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:35.856 01:49:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.856 ************************************ 00:05:35.856 END TEST event_perf 00:05:35.856 ************************************ 00:05:36.116 01:49:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.116 01:49:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:36.116 01:49:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.116 01:49:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.116 ************************************ 00:05:36.116 START TEST event_reactor 00:05:36.116 ************************************ 00:05:36.116 01:49:41 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:36.116 [2024-12-07 01:49:41.383921] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:05:36.116 [2024-12-07 01:49:41.384068] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69869 ] 00:05:36.116 [2024-12-07 01:49:41.529695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.376 [2024-12-07 01:49:41.581591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.427 test_start 00:05:37.427 oneshot 00:05:37.427 tick 100 00:05:37.427 tick 100 00:05:37.427 tick 250 00:05:37.427 tick 100 00:05:37.427 tick 100 00:05:37.427 tick 100 00:05:37.427 tick 250 00:05:37.427 tick 500 00:05:37.427 tick 100 00:05:37.427 tick 100 00:05:37.427 tick 250 00:05:37.427 tick 100 00:05:37.427 tick 100 00:05:37.427 test_end 00:05:37.427 00:05:37.427 real 0m1.330s 00:05:37.427 user 0m1.134s 00:05:37.427 sys 0m0.089s 00:05:37.427 01:49:42 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:37.427 01:49:42 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 ************************************ 00:05:37.427 END TEST event_reactor 00:05:37.427 ************************************ 00:05:37.427 01:49:42 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:37.427 01:49:42 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:37.427 01:49:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:37.427 01:49:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.427 ************************************ 00:05:37.427 START TEST event_reactor_perf 00:05:37.427 ************************************ 00:05:37.427 01:49:42 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:37.427 [2024-12-07 
01:49:42.779009] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:37.427 [2024-12-07 01:49:42.779157] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69905 ] 00:05:37.687 [2024-12-07 01:49:42.922417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.687 [2024-12-07 01:49:42.973407] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.626 test_start 00:05:38.626 test_end 00:05:38.626 Performance: 374864 events per second 00:05:38.626 00:05:38.626 real 0m1.327s 00:05:38.626 user 0m1.131s 00:05:38.626 sys 0m0.088s 00:05:38.626 01:49:44 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:38.626 01:49:44 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:38.626 ************************************ 00:05:38.626 END TEST event_reactor_perf 00:05:38.626 ************************************ 00:05:38.886 01:49:44 event -- event/event.sh@49 -- # uname -s 00:05:38.886 01:49:44 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:38.886 01:49:44 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.886 01:49:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:38.886 01:49:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:38.886 01:49:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.886 ************************************ 00:05:38.886 START TEST event_scheduler 00:05:38.886 ************************************ 00:05:38.886 01:49:44 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:38.886 * Looking for test storage... 
00:05:38.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:38.886 01:49:44 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:38.886 01:49:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:38.886 01:49:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:38.886 01:49:44 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.886 01:49:44 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.146 01:49:44 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:39.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.146 --rc genhtml_branch_coverage=1 00:05:39.146 --rc genhtml_function_coverage=1 00:05:39.146 --rc genhtml_legend=1 00:05:39.146 --rc geninfo_all_blocks=1 00:05:39.146 --rc geninfo_unexecuted_blocks=1 00:05:39.146 00:05:39.146 ' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:39.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.146 --rc genhtml_branch_coverage=1 00:05:39.146 --rc genhtml_function_coverage=1 00:05:39.146 --rc 
genhtml_legend=1 00:05:39.146 --rc geninfo_all_blocks=1 00:05:39.146 --rc geninfo_unexecuted_blocks=1 00:05:39.146 00:05:39.146 ' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:39.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.146 --rc genhtml_branch_coverage=1 00:05:39.146 --rc genhtml_function_coverage=1 00:05:39.146 --rc genhtml_legend=1 00:05:39.146 --rc geninfo_all_blocks=1 00:05:39.146 --rc geninfo_unexecuted_blocks=1 00:05:39.146 00:05:39.146 ' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:39.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.146 --rc genhtml_branch_coverage=1 00:05:39.146 --rc genhtml_function_coverage=1 00:05:39.146 --rc genhtml_legend=1 00:05:39.146 --rc geninfo_all_blocks=1 00:05:39.146 --rc geninfo_unexecuted_blocks=1 00:05:39.146 00:05:39.146 ' 00:05:39.146 01:49:44 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:39.146 01:49:44 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69975 00:05:39.146 01:49:44 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:39.146 01:49:44 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.146 01:49:44 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69975 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 69975 ']' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.146 01:49:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.146 [2024-12-07 01:49:44.432775] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:05:39.146 [2024-12-07 01:49:44.432933] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69975 ] 00:05:39.146 [2024-12-07 01:49:44.565150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:39.405 [2024-12-07 01:49:44.618891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.405 [2024-12-07 01:49:44.619089] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.405 [2024-12-07 01:49:44.619146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.405 [2024-12-07 01:49:44.619277] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:39.974 01:49:45 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.974 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.974 POWER: Cannot set governor of lcore 0 to userspace 00:05:39.974 
POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.974 POWER: Cannot set governor of lcore 0 to performance 00:05:39.974 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:39.974 POWER: Cannot set governor of lcore 0 to userspace 00:05:39.974 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:39.974 POWER: Unable to set Power Management Environment for lcore 0 00:05:39.974 [2024-12-07 01:49:45.336362] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:39.974 [2024-12-07 01:49:45.336446] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:39.974 [2024-12-07 01:49:45.336525] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:39.974 [2024-12-07 01:49:45.336614] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:39.974 [2024-12-07 01:49:45.336693] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:39.974 [2024-12-07 01:49:45.336740] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:39.974 01:49:45 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.974 [2024-12-07 01:49:45.408098] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:39.974 01:49:45 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:39.974 01:49:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:39.974 ************************************
00:05:39.974 START TEST scheduler_create_thread
00:05:39.974 ************************************
00:05:39.974 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:05:39.974 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:39.974 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:39.974 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.233 2
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.233 3
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.233 4
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.233 5
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:40.233 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.234 6
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.234 7
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.234 8
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.234 9
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:40.234 01:49:45 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.615 10
00:05:41.615 01:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:41.615 01:49:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:41.615 01:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:41.615 01:49:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:42.184 01:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:42.184 01:49:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:42.184 01:49:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:42.184 01:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:42.184 01:49:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.121 01:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:43.121 01:49:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:43.121 01:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:43.121 01:49:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.688 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:43.688 01:49:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:43.688 01:49:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:43.688 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:43.688 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:44.255 ************************************
00:05:44.255 END TEST scheduler_create_thread
00:05:44.255 ************************************
00:05:44.255 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:44.255 
00:05:44.255 real 0m4.203s
00:05:44.255 user 0m0.030s
00:05:44.255 sys 0m0.006s
00:05:44.255 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:44.255 01:49:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:44.255 01:49:49 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:44.255 01:49:49 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69975
00:05:44.255 01:49:49 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 69975 ']'
00:05:44.255 01:49:49 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 69975
00:05:44.255 01:49:49 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:05:44.255 01:49:49 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:44.255 01:49:49 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69975
00:05:44.514 killing process with pid 69975
00:05:44.514 01:49:49 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:05:44.514 01:49:49 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:05:44.514 01:49:49 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69975'
00:05:44.514 01:49:49 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 69975
00:05:44.514 01:49:49 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 69975
00:05:44.514 [2024-12-07 01:49:49.904195] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:44.773 ************************************
00:05:44.773 END TEST event_scheduler
00:05:44.773 ************************************
00:05:44.773 
00:05:44.773 real 0m6.073s
00:05:44.773 user 0m13.802s
00:05:44.773 sys 0m0.499s
00:05:44.773 01:49:50 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:44.773 01:49:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:45.033 01:49:50 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:45.033 01:49:50 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:45.033 01:49:50 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:45.033 01:49:50 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:45.033 01:49:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:45.033 ************************************
00:05:45.033 START TEST app_repeat
00:05:45.033 ************************************
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70087
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70087'
00:05:45.033 Process app_repeat pid: 70087
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:45.033 spdk_app_start Round 0
00:05:45.033 01:49:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70087 /var/tmp/spdk-nbd.sock
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70087 ']'
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:45.033 01:49:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:45.033 [2024-12-07 01:49:50.345894] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:05:45.033 [2024-12-07 01:49:50.346102] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70087 ]
00:05:45.033 [2024-12-07 01:49:50.475220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:45.291 [2024-12-07 01:49:50.527798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.291 [2024-12-07 01:49:50.527913] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:45.857 01:49:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:45.857 01:49:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:45.857 01:49:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:46.116 Malloc0
00:05:46.116 01:49:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:46.375 Malloc1
00:05:46.375 01:49:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.375 01:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:46.635 /dev/nbd0
00:05:46.635 01:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:46.635 01:49:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:46.635 1+0 records in
00:05:46.635 1+0 records out
00:05:46.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483029 s, 8.5 MB/s
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:46.635 01:49:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:46.635 01:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:46.635 01:49:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.635 01:49:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:46.893 /dev/nbd1
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:46.893 1+0 records in
00:05:46.893 1+0 records out
00:05:46.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273691 s, 15.0 MB/s
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:46.893 01:49:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:46.893 01:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:47.152 {
00:05:47.152 "nbd_device": "/dev/nbd0",
00:05:47.152 "bdev_name": "Malloc0"
00:05:47.152 },
00:05:47.152 {
00:05:47.152 "nbd_device": "/dev/nbd1",
00:05:47.152 "bdev_name": "Malloc1"
00:05:47.152 }
00:05:47.152 ]'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:47.152 {
00:05:47.152 "nbd_device": "/dev/nbd0",
00:05:47.152 "bdev_name": "Malloc0"
00:05:47.152 },
00:05:47.152 {
00:05:47.152 "nbd_device": "/dev/nbd1",
00:05:47.152 "bdev_name": "Malloc1"
00:05:47.152 }
00:05:47.152 ]'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:47.152 /dev/nbd1'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:47.152 /dev/nbd1'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:47.152 256+0 records in
00:05:47.152 256+0 records out
00:05:47.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516175 s, 203 MB/s
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:47.152 256+0 records in
00:05:47.152 256+0 records out
00:05:47.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166844 s, 62.8 MB/s
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:47.152 256+0 records in
00:05:47.152 256+0 records out
00:05:47.152 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163945 s, 64.0 MB/s
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:05:47.152 01:49:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:47.412 01:49:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:47.671 01:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:47.930 01:49:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:47.930 01:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:47.931 01:49:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:47.931 01:49:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:48.190 01:49:53 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:48.450 [2024-12-07 01:49:53.717200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:48.450 [2024-12-07 01:49:53.762815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:05:48.450 [2024-12-07 01:49:53.762815] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:05:48.450 [2024-12-07 01:49:53.805186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:48.450 [2024-12-07 01:49:53.805272] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:51.740 spdk_app_start Round 1
00:05:51.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:51.740 01:49:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:51.740 01:49:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:05:51.740 01:49:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70087 /var/tmp/spdk-nbd.sock
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70087 ']'
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:51.740 01:49:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:05:51.740 01:49:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:51.740 Malloc0
00:05:51.740 01:49:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:51.740 Malloc1
00:05:52.000 01:49:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:52.001 /dev/nbd0
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.001 1+0 records in
00:05:52.001 1+0 records out
00:05:52.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277414 s, 14.8 MB/s
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:05:52.001 01:49:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:52.001 01:49:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:52.262 /dev/nbd1
00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:52.262 1+0 records in
00:05:52.262 1+0 records out
00:05:52.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377376 s, 10.9 MB/s
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:05:52.262 01:49:57 event.app_repeat
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:52.262 01:49:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.262 01:49:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.522 { 00:05:52.522 "nbd_device": "/dev/nbd0", 00:05:52.522 "bdev_name": "Malloc0" 00:05:52.522 }, 00:05:52.522 { 00:05:52.522 "nbd_device": "/dev/nbd1", 00:05:52.522 "bdev_name": "Malloc1" 00:05:52.522 } 00:05:52.522 ]' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.522 { 00:05:52.522 "nbd_device": "/dev/nbd0", 00:05:52.522 "bdev_name": "Malloc0" 00:05:52.522 }, 00:05:52.522 { 00:05:52.522 "nbd_device": "/dev/nbd1", 00:05:52.522 "bdev_name": "Malloc1" 00:05:52.522 } 00:05:52.522 ]' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.522 /dev/nbd1' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.522 /dev/nbd1' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.522 
01:49:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.522 01:49:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.781 256+0 records in 00:05:52.781 256+0 records out 00:05:52.781 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630341 s, 166 MB/s 00:05:52.782 01:49:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.782 01:49:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.782 256+0 records in 00:05:52.782 256+0 records out 00:05:52.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241514 s, 43.4 MB/s 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.782 256+0 records in 00:05:52.782 256+0 records out 00:05:52.782 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261873 s, 40.0 MB/s 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.782 01:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:53.041 01:49:58 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:53.041 01:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.042 01:49:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.302 01:49:58 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.302 01:49:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.302 01:49:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.566 01:49:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.837 [2024-12-07 01:49:59.118325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.837 [2024-12-07 01:49:59.165999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.837 [2024-12-07 01:49:59.166028] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.837 [2024-12-07 01:49:59.209462] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.837 [2024-12-07 01:49:59.209517] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.146 spdk_app_start Round 2 00:05:57.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:57.146 01:50:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.146 01:50:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:57.146 01:50:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70087 /var/tmp/spdk-nbd.sock 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70087 ']' 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.146 01:50:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.146 01:50:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.146 01:50:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:57.146 01:50:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.146 Malloc0 00:05:57.146 01:50:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.146 Malloc1 00:05:57.146 01:50:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.146 01:50:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.405 /dev/nbd0 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.405 1+0 records in 00:05:57.405 1+0 records out 00:05:57.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169 s, 24.2 MB/s 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.405 01:50:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.405 01:50:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.664 /dev/nbd1 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:57.664 01:50:03 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.664 1+0 records in 00:05:57.664 1+0 records out 00:05:57.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016254 s, 25.2 MB/s 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:57.664 01:50:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.664 01:50:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.923 { 00:05:57.923 "nbd_device": "/dev/nbd0", 00:05:57.923 "bdev_name": "Malloc0" 00:05:57.923 }, 00:05:57.923 { 00:05:57.923 "nbd_device": "/dev/nbd1", 00:05:57.923 "bdev_name": "Malloc1" 00:05:57.923 } 00:05:57.923 ]' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.923 { 
00:05:57.923 "nbd_device": "/dev/nbd0", 00:05:57.923 "bdev_name": "Malloc0" 00:05:57.923 }, 00:05:57.923 { 00:05:57.923 "nbd_device": "/dev/nbd1", 00:05:57.923 "bdev_name": "Malloc1" 00:05:57.923 } 00:05:57.923 ]' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.923 /dev/nbd1' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.923 /dev/nbd1' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.923 256+0 records in 00:05:57.923 256+0 records out 00:05:57.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449078 s, 233 MB/s 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.923 01:50:03 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.923 256+0 records in 00:05:57.923 256+0 records out 00:05:57.923 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155763 s, 67.3 MB/s 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.923 01:50:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:58.184 256+0 records in 00:05:58.184 256+0 records out 00:05:58.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163878 s, 64.0 MB/s 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.184 01:50:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.444 01:50:03 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.444 01:50:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.704 01:50:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.704 01:50:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.965 01:50:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:59.226 
[2024-12-07 01:50:04.495193] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.226 [2024-12-07 01:50:04.538328] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.226 [2024-12-07 01:50:04.538335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.226 [2024-12-07 01:50:04.579384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.226 [2024-12-07 01:50:04.579448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:02.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:02.520 01:50:07 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70087 /var/tmp/spdk-nbd.sock 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70087 ']' 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:02.520 01:50:07 event.app_repeat -- event/event.sh@39 -- # killprocess 70087 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70087 ']' 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70087 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70087 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70087' 00:06:02.520 killing process with pid 70087 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70087 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70087 00:06:02.520 spdk_app_start is called in Round 0. 00:06:02.520 Shutdown signal received, stop current app iteration 00:06:02.520 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:02.520 spdk_app_start is called in Round 1. 00:06:02.520 Shutdown signal received, stop current app iteration 00:06:02.520 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:02.520 spdk_app_start is called in Round 2. 
00:06:02.520 Shutdown signal received, stop current app iteration 00:06:02.520 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:06:02.520 spdk_app_start is called in Round 3. 00:06:02.520 Shutdown signal received, stop current app iteration 00:06:02.520 01:50:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:02.520 01:50:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:02.520 00:06:02.520 real 0m17.525s 00:06:02.520 user 0m38.729s 00:06:02.520 sys 0m2.632s 00:06:02.520 ************************************ 00:06:02.520 END TEST app_repeat 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.520 01:50:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 ************************************ 00:06:02.520 01:50:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:02.520 01:50:07 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.520 01:50:07 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.520 01:50:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.520 01:50:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.520 ************************************ 00:06:02.520 START TEST cpu_locks 00:06:02.520 ************************************ 00:06:02.520 01:50:07 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:02.780 * Looking for test storage... 
00:06:02.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.780 01:50:08 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.780 --rc genhtml_branch_coverage=1 00:06:02.780 --rc genhtml_function_coverage=1 00:06:02.780 --rc genhtml_legend=1 00:06:02.780 --rc geninfo_all_blocks=1 00:06:02.780 --rc geninfo_unexecuted_blocks=1 00:06:02.780 00:06:02.780 ' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.780 --rc genhtml_branch_coverage=1 00:06:02.780 --rc genhtml_function_coverage=1 00:06:02.780 --rc genhtml_legend=1 00:06:02.780 --rc geninfo_all_blocks=1 00:06:02.780 --rc geninfo_unexecuted_blocks=1 
00:06:02.780 00:06:02.780 ' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.780 --rc genhtml_branch_coverage=1 00:06:02.780 --rc genhtml_function_coverage=1 00:06:02.780 --rc genhtml_legend=1 00:06:02.780 --rc geninfo_all_blocks=1 00:06:02.780 --rc geninfo_unexecuted_blocks=1 00:06:02.780 00:06:02.780 ' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:02.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.780 --rc genhtml_branch_coverage=1 00:06:02.780 --rc genhtml_function_coverage=1 00:06:02.780 --rc genhtml_legend=1 00:06:02.780 --rc geninfo_all_blocks=1 00:06:02.780 --rc geninfo_unexecuted_blocks=1 00:06:02.780 00:06:02.780 ' 00:06:02.780 01:50:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:02.780 01:50:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:02.780 01:50:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:02.780 01:50:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.780 01:50:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.780 ************************************ 00:06:02.780 START TEST default_locks 00:06:02.780 ************************************ 00:06:02.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70518 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70518 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70518 ']' 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.780 01:50:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.780 [2024-12-07 01:50:08.215064] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:02.780 [2024-12-07 01:50:08.215289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70518 ] 00:06:03.040 [2024-12-07 01:50:08.357548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.040 [2024-12-07 01:50:08.401223] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.609 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:03.609 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:03.609 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70518 00:06:03.609 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.609 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70518 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70518 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70518 ']' 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70518 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.877 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70518 00:06:04.156 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:04.156 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:04.156 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70518' 00:06:04.156 killing process with pid 70518 00:06:04.156 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70518 00:06:04.156 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70518 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70518 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70518 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70518 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70518 ']' 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.433 ERROR: process (pid: 70518) is no longer running 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.433 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70518) - No such process 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.433 00:06:04.433 real 0m1.603s 00:06:04.433 user 0m1.551s 00:06:04.433 sys 0m0.541s 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:04.433 01:50:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.433 ************************************ 00:06:04.433 END TEST default_locks 00:06:04.433 ************************************ 00:06:04.433 01:50:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.433 01:50:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:04.433 01:50:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:04.433 01:50:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.433 ************************************ 00:06:04.433 START TEST default_locks_via_rpc 00:06:04.433 ************************************ 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70564 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70564 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70564 ']' 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.433 01:50:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.693 [2024-12-07 01:50:09.918182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:04.693 [2024-12-07 01:50:09.918494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70564 ] 00:06:04.693 [2024-12-07 01:50:10.063517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.693 [2024-12-07 01:50:10.109015] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.632 01:50:10 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70564 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.632 01:50:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70564 00:06:05.632 01:50:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70564 00:06:05.632 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70564 ']' 00:06:05.632 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70564 00:06:05.632 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70564 00:06:05.891 killing process with pid 70564 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70564' 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70564 00:06:05.891 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70564 00:06:06.151 00:06:06.151 real 0m1.713s 00:06:06.151 user 0m1.734s 00:06:06.151 sys 0m0.568s 00:06:06.151 01:50:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:06.151 01:50:11 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.151 ************************************ 00:06:06.151 END TEST default_locks_via_rpc 00:06:06.151 ************************************ 00:06:06.151 01:50:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:06.151 01:50:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.151 01:50:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.151 01:50:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.151 ************************************ 00:06:06.151 START TEST non_locking_app_on_locked_coremask 00:06:06.151 ************************************ 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70612 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70612 /var/tmp/spdk.sock 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70612 ']' 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:06.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:06.151 01:50:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.412 [2024-12-07 01:50:11.674386] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:06.412 [2024-12-07 01:50:11.674616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70612 ] 00:06:06.412 [2024-12-07 01:50:11.817505] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.412 [2024-12-07 01:50:11.860962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70628 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70628 /var/tmp/spdk2.sock 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70628 ']' 00:06:07.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.352 01:50:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.352 [2024-12-07 01:50:12.571875] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:07.352 [2024-12-07 01:50:12.571993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70628 ] 00:06:07.352 [2024-12-07 01:50:12.706589] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:07.352 [2024-12-07 01:50:12.706640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.352 [2024-12-07 01:50:12.795158] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.292 01:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.292 01:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:08.292 01:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70612 00:06:08.292 01:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70612 00:06:08.292 01:50:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70612 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70612 ']' 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70612 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70612 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.862 killing process with pid 70612 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70612' 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70612 00:06:08.862 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70612 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70628 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70628 ']' 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70628 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.432 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70628 00:06:09.691 killing process with pid 70628 00:06:09.691 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.691 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.691 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70628' 00:06:09.691 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70628 00:06:09.691 01:50:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70628 00:06:09.951 ************************************ 00:06:09.951 END TEST non_locking_app_on_locked_coremask 00:06:09.951 ************************************ 00:06:09.951 00:06:09.951 real 0m3.696s 00:06:09.951 
user 0m3.854s 00:06:09.951 sys 0m1.144s 00:06:09.951 01:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:09.951 01:50:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.951 01:50:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:09.951 01:50:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.951 01:50:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.951 01:50:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.951 ************************************ 00:06:09.951 START TEST locking_app_on_unlocked_coremask 00:06:09.951 ************************************ 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70699 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70699 /var/tmp/spdk.sock 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70699 ']' 00:06:09.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.951 01:50:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.210 [2024-12-07 01:50:15.436599] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:10.210 [2024-12-07 01:50:15.436754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70699 ] 00:06:10.210 [2024-12-07 01:50:15.579463] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.210 [2024-12-07 01:50:15.579528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.210 [2024-12-07 01:50:15.622757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70709 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70709 /var/tmp/spdk2.sock 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70709 ']' 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.150 01:50:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.150 [2024-12-07 01:50:16.336371] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:11.150 [2024-12-07 01:50:16.336576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70709 ]
00:06:11.150 [2024-12-07 01:50:16.473013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:11.150 [2024-12-07 01:50:16.560888] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:11.721 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:11.721 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:11.721 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70709
00:06:11.721 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70709
00:06:11.721 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70699
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70699 ']'
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70699
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70699
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 70699
00:06:12.290 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70699'
00:06:12.291 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70699
00:06:12.291 01:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70699
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70709
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70709 ']'
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70709
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:13.231 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70709
killing process with pid 70709
00:06:13.232 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:13.232 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:13.232 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70709'
00:06:13.232 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70709
00:06:13.232 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70709
00:06:13.491 ************************************
00:06:13.491 END TEST locking_app_on_unlocked_coremask
00:06:13.491 ************************************
00:06:13.491
00:06:13.491 real 0m3.440s
00:06:13.491 user 0m3.603s
00:06:13.491 sys 0m1.017s
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.491 01:50:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:13.491 01:50:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:13.491 01:50:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:13.491 01:50:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:13.491 ************************************
00:06:13.491 START TEST locking_app_on_locked_coremask
00:06:13.491 ************************************
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70773
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70773 /var/tmp/spdk.sock
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70773 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:13.491 01:50:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.491 [2024-12-07 01:50:18.944986] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:13.491 [2024-12-07 01:50:18.945116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70773 ]
00:06:13.751 [2024-12-07 01:50:19.088574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.751 [2024-12-07 01:50:19.132515] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70789
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70789 /var/tmp/spdk2.sock
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70789 /var/tmp/spdk2.sock
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70789 /var/tmp/spdk2.sock
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70789 ']'
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:14.321 01:50:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:14.581 [2024-12-07 01:50:19.818915] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:14.582 [2024-12-07 01:50:19.819124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70789 ]
00:06:14.582 [2024-12-07 01:50:19.953834] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70773 has claimed it.
00:06:14.582 [2024-12-07 01:50:19.953921] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:15.151 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70789) - No such process
00:06:15.151 ERROR: process (pid: 70789) is no longer running
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70773
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70773
00:06:15.151 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70773
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70773 ']'
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70773
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70773
00:06:15.410 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
killing process with pid 70773
00:06:15.670 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:15.670 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70773'
00:06:15.670 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70773
00:06:15.670 01:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70773
00:06:15.930 ************************************
00:06:15.930 END TEST locking_app_on_locked_coremask
00:06:15.930 ************************************
00:06:15.930
00:06:15.930 real 0m2.400s
00:06:15.930 user 0m2.565s
00:06:15.930 sys 0m0.682s
00:06:15.930 01:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:15.930 01:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.930 01:50:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:15.930 01:50:21 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:15.930 01:50:21 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:15.930 01:50:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.930 ************************************
00:06:15.930 START TEST locking_overlapped_coremask
00:06:15.930 ************************************
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70831
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70831 /var/tmp/spdk.sock
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70831 ']'
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:15.930 01:50:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:16.191 [2024-12-07 01:50:21.416218] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:16.191 [2024-12-07 01:50:21.416433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70831 ]
00:06:16.191 [2024-12-07 01:50:21.563648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:16.191 [2024-12-07 01:50:21.608868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.191 [2024-12-07 01:50:21.608945] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.191 [2024-12-07 01:50:21.609060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:16.761 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70849
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70849 /var/tmp/spdk2.sock
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70849 /var/tmp/spdk2.sock
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 70849 /var/tmp/spdk2.sock
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 70849 ']'
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:17.021 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.021 [2024-12-07 01:50:22.312517] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:17.021 [2024-12-07 01:50:22.312773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70849 ]
00:06:17.021 [2024-12-07 01:50:22.455020] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70831 has claimed it.
00:06:17.021 [2024-12-07 01:50:22.455093] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:17.592 ERROR: process (pid: 70849) is no longer running
00:06:17.592 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70849) - No such process
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70831
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 70831 ']'
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 70831
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70831
killing process with pid 70831
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70831'
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 70831
00:06:17.592 01:50:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 70831
00:06:18.164
00:06:18.164 real 0m2.039s
00:06:18.164 user 0m5.417s
00:06:18.164 sys 0m0.495s
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.164 ************************************
00:06:18.164 END TEST locking_overlapped_coremask
00:06:18.164 ************************************
00:06:18.164 01:50:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:18.164 01:50:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:18.164 01:50:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:18.164 01:50:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:18.164 ************************************
00:06:18.164 START TEST locking_overlapped_coremask_via_rpc
00:06:18.164 ************************************
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70891
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70891 /var/tmp/spdk.sock
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70891 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:18.164 01:50:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:18.164 [2024-12-07 01:50:23.521289] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:18.164 [2024-12-07 01:50:23.521409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70891 ]
00:06:18.424 [2024-12-07 01:50:23.667854] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:18.424 [2024-12-07 01:50:23.667997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:18.424 [2024-12-07 01:50:23.713308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:06:18.424 [2024-12-07 01:50:23.713464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.424 [2024-12-07 01:50:23.713533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70909
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70909 /var/tmp/spdk2.sock
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70909 ']'
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:18.993 01:50:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:18.993 [2024-12-07 01:50:24.417282] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:06:18.993 [2024-12-07 01:50:24.417493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70909 ]
00:06:19.282 [2024-12-07 01:50:24.551430] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:19.282 [2024-12-07 01:50:24.551478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:19.282 [2024-12-07 01:50:24.659387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3
00:06:19.282 [2024-12-07 01:50:24.659473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:06:19.282 [2024-12-07 01:50:24.659571] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:19.853 [2024-12-07 01:50:25.278874] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70891 has claimed it.
00:06:19.853 request:
00:06:19.853 {
00:06:19.853 "method": "framework_enable_cpumask_locks",
00:06:19.853 "req_id": 1
00:06:19.853 }
00:06:19.853 Got JSON-RPC error response
00:06:19.853 response:
00:06:19.853 {
00:06:19.853 "code": -32603,
00:06:19.853 "message": "Failed to claim CPU core: 2"
00:06:19.853 }
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70891 /var/tmp/spdk.sock
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70891 ']'
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:19.853 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70909 /var/tmp/spdk2.sock
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70909 ']'
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.112 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.371 00:06:20.371 real 0m2.295s 00:06:20.371 user 0m1.061s 00:06:20.371 sys 0m0.163s 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.371 01:50:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.371 ************************************ 00:06:20.371 END TEST locking_overlapped_coremask_via_rpc 00:06:20.371 ************************************ 00:06:20.371 01:50:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:20.371 01:50:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70891 ]] 00:06:20.371 01:50:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70891 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70891 ']' 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70891 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70891 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.371 killing process with pid 70891 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70891' 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70891 00:06:20.371 01:50:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70891 00:06:20.939 01:50:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70909 ]] 00:06:20.939 01:50:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70909 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70909 ']' 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70909 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70909 00:06:20.939 killing process with pid 70909 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 70909' 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 70909 00:06:20.939 01:50:26 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 70909 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70891 ]] 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70891 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70891 ']' 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70891 00:06:21.198 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70891) - No such process 00:06:21.198 Process with pid 70891 is not found 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70891 is not found' 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70909 ]] 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70909 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 70909 ']' 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 70909 00:06:21.198 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (70909) - No such process 00:06:21.198 Process with pid 70909 is not found 00:06:21.198 01:50:26 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 70909 is not found' 00:06:21.198 01:50:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.199 00:06:21.199 real 0m18.763s 00:06:21.199 user 0m31.173s 00:06:21.199 sys 0m5.724s 00:06:21.199 01:50:26 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.199 ************************************ 00:06:21.199 END TEST cpu_locks 00:06:21.199 
************************************ 00:06:21.199 01:50:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.457 ************************************ 00:06:21.457 END TEST event 00:06:21.457 ************************************ 00:06:21.457 00:06:21.457 real 0m46.985s 00:06:21.457 user 1m30.334s 00:06:21.457 sys 0m9.523s 00:06:21.457 01:50:26 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.457 01:50:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.457 01:50:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.457 01:50:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.457 01:50:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.457 01:50:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.457 ************************************ 00:06:21.457 START TEST thread 00:06:21.457 ************************************ 00:06:21.457 01:50:26 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.457 * Looking for test storage... 
00:06:21.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:21.457 01:50:26 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.457 01:50:26 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.457 01:50:26 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.715 01:50:26 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.715 01:50:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.715 01:50:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.715 01:50:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.715 01:50:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.715 01:50:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.715 01:50:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.715 01:50:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.715 01:50:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.715 01:50:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.715 01:50:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.715 01:50:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.715 01:50:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:21.715 01:50:26 thread -- scripts/common.sh@345 -- # : 1 00:06:21.715 01:50:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.715 01:50:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.715 01:50:26 thread -- scripts/common.sh@365 -- # decimal 1 00:06:21.715 01:50:26 thread -- scripts/common.sh@353 -- # local d=1 00:06:21.715 01:50:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.715 01:50:26 thread -- scripts/common.sh@355 -- # echo 1 00:06:21.715 01:50:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.715 01:50:26 thread -- scripts/common.sh@366 -- # decimal 2 00:06:21.715 01:50:26 thread -- scripts/common.sh@353 -- # local d=2 00:06:21.715 01:50:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.715 01:50:26 thread -- scripts/common.sh@355 -- # echo 2 00:06:21.715 01:50:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.715 01:50:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.715 01:50:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.715 01:50:26 thread -- scripts/common.sh@368 -- # return 0 00:06:21.715 01:50:26 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.715 01:50:26 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.715 --rc genhtml_branch_coverage=1 00:06:21.715 --rc genhtml_function_coverage=1 00:06:21.715 --rc genhtml_legend=1 00:06:21.715 --rc geninfo_all_blocks=1 00:06:21.715 --rc geninfo_unexecuted_blocks=1 00:06:21.715 00:06:21.715 ' 00:06:21.715 01:50:26 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.715 --rc genhtml_branch_coverage=1 00:06:21.715 --rc genhtml_function_coverage=1 00:06:21.715 --rc genhtml_legend=1 00:06:21.715 --rc geninfo_all_blocks=1 00:06:21.715 --rc geninfo_unexecuted_blocks=1 00:06:21.715 00:06:21.715 ' 00:06:21.715 01:50:26 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.715 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.715 --rc genhtml_branch_coverage=1 00:06:21.715 --rc genhtml_function_coverage=1 00:06:21.715 --rc genhtml_legend=1 00:06:21.715 --rc geninfo_all_blocks=1 00:06:21.715 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 01:50:26 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.716 --rc genhtml_branch_coverage=1 00:06:21.716 --rc genhtml_function_coverage=1 00:06:21.716 --rc genhtml_legend=1 00:06:21.716 --rc geninfo_all_blocks=1 00:06:21.716 --rc geninfo_unexecuted_blocks=1 00:06:21.716 00:06:21.716 ' 00:06:21.716 01:50:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.716 01:50:26 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:21.716 01:50:26 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.716 01:50:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.716 ************************************ 00:06:21.716 START TEST thread_poller_perf 00:06:21.716 ************************************ 00:06:21.716 01:50:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.716 [2024-12-07 01:50:27.042714] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:21.716 [2024-12-07 01:50:27.042872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71049 ] 00:06:21.974 [2024-12-07 01:50:27.187924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.974 [2024-12-07 01:50:27.231873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.974 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:22.910 [2024-12-07T01:50:28.372Z] ====================================== 00:06:22.910 [2024-12-07T01:50:28.372Z] busy:2297982346 (cyc) 00:06:22.910 [2024-12-07T01:50:28.372Z] total_run_count: 418000 00:06:22.910 [2024-12-07T01:50:28.372Z] tsc_hz: 2290000000 (cyc) 00:06:22.910 [2024-12-07T01:50:28.372Z] ====================================== 00:06:22.910 [2024-12-07T01:50:28.372Z] poller_cost: 5497 (cyc), 2400 (nsec) 00:06:22.910 00:06:22.910 real 0m1.322s 00:06:22.910 user 0m1.138s 00:06:22.910 sys 0m0.078s 00:06:22.910 01:50:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.910 ************************************ 00:06:22.910 END TEST thread_poller_perf 00:06:22.910 ************************************ 00:06:22.910 01:50:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.170 01:50:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.170 01:50:28 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:23.170 01:50:28 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.170 01:50:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.170 ************************************ 00:06:23.170 START TEST thread_poller_perf 00:06:23.170 
************************************ 00:06:23.170 01:50:28 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.170 [2024-12-07 01:50:28.441077] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:23.170 [2024-12-07 01:50:28.441241] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71080 ] 00:06:23.170 [2024-12-07 01:50:28.583654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.170 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:23.170 [2024-12-07 01:50:28.628420] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.549 [2024-12-07T01:50:30.011Z] ====================================== 00:06:24.549 [2024-12-07T01:50:30.011Z] busy:2293404314 (cyc) 00:06:24.549 [2024-12-07T01:50:30.011Z] total_run_count: 5546000 00:06:24.549 [2024-12-07T01:50:30.011Z] tsc_hz: 2290000000 (cyc) 00:06:24.549 [2024-12-07T01:50:30.011Z] ====================================== 00:06:24.549 [2024-12-07T01:50:30.011Z] poller_cost: 413 (cyc), 180 (nsec) 00:06:24.549 00:06:24.549 real 0m1.319s 00:06:24.549 user 0m1.138s 00:06:24.549 sys 0m0.075s 00:06:24.549 ************************************ 00:06:24.549 01:50:29 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.549 01:50:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.549 END TEST thread_poller_perf 00:06:24.549 ************************************ 00:06:24.549 01:50:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.549 ************************************ 00:06:24.549 END TEST thread 00:06:24.549 ************************************ 00:06:24.549 
00:06:24.549 real 0m3.010s 00:06:24.549 user 0m2.430s 00:06:24.549 sys 0m0.373s 00:06:24.549 01:50:29 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.549 01:50:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.549 01:50:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:24.549 01:50:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.549 01:50:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.549 01:50:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.549 01:50:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.549 ************************************ 00:06:24.549 START TEST app_cmdline 00:06:24.550 ************************************ 00:06:24.550 01:50:29 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.550 * Looking for test storage... 00:06:24.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.550 01:50:29 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.550 01:50:29 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.550 01:50:29 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.809 01:50:30 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.809 01:50:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:24.809 01:50:30 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.809 01:50:30 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.809 --rc genhtml_branch_coverage=1 00:06:24.809 --rc genhtml_function_coverage=1 00:06:24.809 --rc 
genhtml_legend=1 00:06:24.809 --rc geninfo_all_blocks=1 00:06:24.809 --rc geninfo_unexecuted_blocks=1 00:06:24.809 00:06:24.809 ' 00:06:24.809 01:50:30 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.809 --rc genhtml_branch_coverage=1 00:06:24.809 --rc genhtml_function_coverage=1 00:06:24.809 --rc genhtml_legend=1 00:06:24.809 --rc geninfo_all_blocks=1 00:06:24.809 --rc geninfo_unexecuted_blocks=1 00:06:24.809 00:06:24.809 ' 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:24.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.810 --rc genhtml_branch_coverage=1 00:06:24.810 --rc genhtml_function_coverage=1 00:06:24.810 --rc genhtml_legend=1 00:06:24.810 --rc geninfo_all_blocks=1 00:06:24.810 --rc geninfo_unexecuted_blocks=1 00:06:24.810 00:06:24.810 ' 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.810 --rc genhtml_branch_coverage=1 00:06:24.810 --rc genhtml_function_coverage=1 00:06:24.810 --rc genhtml_legend=1 00:06:24.810 --rc geninfo_all_blocks=1 00:06:24.810 --rc geninfo_unexecuted_blocks=1 00:06:24.810 00:06:24.810 ' 00:06:24.810 01:50:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.810 01:50:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71169 00:06:24.810 01:50:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.810 01:50:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71169 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71169 ']' 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.810 01:50:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.810 [2024-12-07 01:50:30.185951] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:24.810 [2024-12-07 01:50:30.186118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:06:25.069 [2024-12-07 01:50:30.349716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.069 [2024-12-07 01:50:30.393893] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.639 01:50:30 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.639 01:50:30 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:25.639 01:50:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:25.899 { 00:06:25.899 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:06:25.899 "fields": { 00:06:25.899 "major": 24, 00:06:25.899 "minor": 9, 00:06:25.899 "patch": 1, 00:06:25.899 "suffix": "-pre", 00:06:25.899 "commit": "b18e1bd62" 00:06:25.899 } 00:06:25.899 } 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.899 01:50:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:25.899 01:50:31 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:26.160 request: 00:06:26.160 { 00:06:26.160 "method": "env_dpdk_get_mem_stats", 00:06:26.160 "req_id": 1 00:06:26.160 } 00:06:26.160 Got JSON-RPC error response 00:06:26.160 response: 00:06:26.160 { 00:06:26.160 "code": -32601, 00:06:26.160 "message": "Method not found" 00:06:26.160 } 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.160 01:50:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71169 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71169 ']' 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71169 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71169 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71169' 00:06:26.160 killing process with pid 71169 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@969 -- # kill 71169 00:06:26.160 01:50:31 app_cmdline -- common/autotest_common.sh@974 -- # wait 71169 00:06:26.419 00:06:26.419 real 0m2.032s 00:06:26.419 user 0m2.258s 00:06:26.420 sys 0m0.574s 00:06:26.420 01:50:31 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.420 01:50:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.420 ************************************ 00:06:26.420 END TEST app_cmdline 00:06:26.420 ************************************ 00:06:26.680 01:50:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.680 01:50:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.680 01:50:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.680 01:50:31 -- common/autotest_common.sh@10 -- # set +x 00:06:26.680 ************************************ 00:06:26.680 START TEST version 00:06:26.680 ************************************ 00:06:26.680 01:50:31 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.680 * Looking for test storage... 00:06:26.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.680 01:50:32 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:26.680 01:50:32 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:26.680 01:50:32 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:26.680 01:50:32 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:26.680 01:50:32 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.680 01:50:32 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.680 01:50:32 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.680 01:50:32 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.680 01:50:32 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.680 01:50:32 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.680 01:50:32 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.680 01:50:32 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.680 01:50:32 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.940 01:50:32 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:26.940 01:50:32 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.940 01:50:32 version -- scripts/common.sh@344 -- # case "$op" in 00:06:26.940 01:50:32 version -- scripts/common.sh@345 -- # : 1 00:06:26.940 01:50:32 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.940 01:50:32 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.940 01:50:32 version -- scripts/common.sh@365 -- # decimal 1 00:06:26.940 01:50:32 version -- scripts/common.sh@353 -- # local d=1 00:06:26.940 01:50:32 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.940 01:50:32 version -- scripts/common.sh@355 -- # echo 1 00:06:26.940 01:50:32 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.940 01:50:32 version -- scripts/common.sh@366 -- # decimal 2 00:06:26.940 01:50:32 version -- scripts/common.sh@353 -- # local d=2 00:06:26.940 01:50:32 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.940 01:50:32 version -- scripts/common.sh@355 -- # echo 2 00:06:26.940 01:50:32 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.940 01:50:32 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.940 01:50:32 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.940 01:50:32 version -- scripts/common.sh@368 -- # return 0 00:06:26.940 01:50:32 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.940 01:50:32 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:26.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.940 --rc genhtml_branch_coverage=1 00:06:26.940 --rc genhtml_function_coverage=1 00:06:26.940 --rc genhtml_legend=1 00:06:26.940 --rc geninfo_all_blocks=1 00:06:26.940 --rc geninfo_unexecuted_blocks=1 00:06:26.940 00:06:26.940 ' 00:06:26.940 01:50:32 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:26.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.940 --rc genhtml_branch_coverage=1 00:06:26.940 --rc genhtml_function_coverage=1 00:06:26.940 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 01:50:32 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 01:50:32 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:26.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.941 --rc genhtml_branch_coverage=1 00:06:26.941 --rc genhtml_function_coverage=1 00:06:26.941 --rc genhtml_legend=1 00:06:26.941 --rc geninfo_all_blocks=1 00:06:26.941 --rc geninfo_unexecuted_blocks=1 00:06:26.941 00:06:26.941 ' 00:06:26.941 01:50:32 version -- app/version.sh@17 -- # get_header_version major 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # cut -f2 00:06:26.941 01:50:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.941 01:50:32 version -- app/version.sh@17 -- # major=24 00:06:26.941 01:50:32 version -- app/version.sh@18 -- # get_header_version minor 00:06:26.941 01:50:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # cut -f2 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.941 01:50:32 version -- app/version.sh@18 -- # minor=9 00:06:26.941 01:50:32 
version -- app/version.sh@19 -- # get_header_version patch 00:06:26.941 01:50:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # cut -f2 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.941 01:50:32 version -- app/version.sh@19 -- # patch=1 00:06:26.941 01:50:32 version -- app/version.sh@20 -- # get_header_version suffix 00:06:26.941 01:50:32 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # cut -f2 00:06:26.941 01:50:32 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.941 01:50:32 version -- app/version.sh@20 -- # suffix=-pre 00:06:26.941 01:50:32 version -- app/version.sh@22 -- # version=24.9 00:06:26.941 01:50:32 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:26.941 01:50:32 version -- app/version.sh@25 -- # version=24.9.1 00:06:26.941 01:50:32 version -- app/version.sh@28 -- # version=24.9.1rc0 00:06:26.941 01:50:32 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:26.941 01:50:32 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:26.941 01:50:32 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:06:26.941 01:50:32 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:06:26.941 00:06:26.941 real 0m0.312s 00:06:26.941 user 0m0.209s 00:06:26.941 sys 0m0.158s 00:06:26.941 ************************************ 00:06:26.941 END TEST version 00:06:26.941 ************************************ 00:06:26.941 01:50:32 version -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:06:26.941 01:50:32 version -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 01:50:32 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:26.941 01:50:32 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:26.941 01:50:32 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:26.941 01:50:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.941 01:50:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.941 01:50:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.941 ************************************ 00:06:26.941 START TEST bdev_raid 00:06:26.941 ************************************ 00:06:26.941 01:50:32 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:27.201 * Looking for test storage... 00:06:27.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.201 01:50:32 bdev_raid -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.201 01:50:32 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.201 01:50:32 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.201 --rc genhtml_branch_coverage=1 00:06:27.201 --rc genhtml_function_coverage=1 00:06:27.201 --rc genhtml_legend=1 00:06:27.201 --rc geninfo_all_blocks=1 00:06:27.201 --rc geninfo_unexecuted_blocks=1 00:06:27.201 00:06:27.201 ' 00:06:27.201 01:50:32 bdev_raid -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.202 --rc genhtml_branch_coverage=1 00:06:27.202 --rc genhtml_function_coverage=1 00:06:27.202 --rc genhtml_legend=1 00:06:27.202 --rc geninfo_all_blocks=1 00:06:27.202 --rc geninfo_unexecuted_blocks=1 00:06:27.202 00:06:27.202 ' 00:06:27.202 01:50:32 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.202 --rc genhtml_branch_coverage=1 00:06:27.202 --rc genhtml_function_coverage=1 00:06:27.202 --rc genhtml_legend=1 00:06:27.202 --rc geninfo_all_blocks=1 00:06:27.202 --rc geninfo_unexecuted_blocks=1 00:06:27.202 00:06:27.202 ' 00:06:27.202 01:50:32 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.202 --rc genhtml_branch_coverage=1 00:06:27.202 --rc genhtml_function_coverage=1 00:06:27.202 --rc genhtml_legend=1 00:06:27.202 --rc geninfo_all_blocks=1 00:06:27.202 --rc geninfo_unexecuted_blocks=1 00:06:27.202 00:06:27.202 ' 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:27.202 01:50:32 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:27.202 01:50:32 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:27.202 01:50:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.202 01:50:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.202 01:50:32 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:06:27.202 ************************************ 00:06:27.202 START TEST raid1_resize_data_offset_test 00:06:27.202 ************************************ 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71329 00:06:27.202 Process raid pid: 71329 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71329' 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71329 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 71329 ']' 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.202 01:50:32 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:27.202 [2024-12-07 01:50:32.627513] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:27.202 [2024-12-07 01:50:32.627633] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.462 [2024-12-07 01:50:32.771682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.462 [2024-12-07 01:50:32.815215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.462 [2024-12-07 01:50:32.856315] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.462 [2024-12-07 01:50:32.856352] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.031 malloc0 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.031 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 malloc1 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 null0 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 [2024-12-07 01:50:33.530000] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:28.291 [2024-12-07 01:50:33.531807] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:28.291 [2024-12-07 01:50:33.531874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:28.291 [2024-12-07 01:50:33.532015] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:28.291 [2024-12-07 01:50:33.532026] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:28.291 [2024-12-07 01:50:33.532266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:28.291 [2024-12-07 01:50:33.532387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:28.291 [2024-12-07 01:50:33.532403] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:28.291 [2024-12-07 01:50:33.532543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 [2024-12-07 01:50:33.589871] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 malloc2 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 [2024-12-07 01:50:33.719712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:28.291 [2024-12-07 01:50:33.724497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.291 [2024-12-07 01:50:33.727105] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.291 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71329 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 71329 ']' 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 71329 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71329 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:28.551 killing process with pid 71329 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71329' 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 71329 00:06:28.551 [2024-12-07 01:50:33.818759] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:28.551 01:50:33 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 71329 00:06:28.551 [2024-12-07 01:50:33.818943] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:28.551 [2024-12-07 01:50:33.818988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.551 [2024-12-07 01:50:33.819005] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:28.551 [2024-12-07 01:50:33.824199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:28.551 [2024-12-07 01:50:33.824474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:28.551 [2024-12-07 01:50:33.824489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:28.811 [2024-12-07 01:50:34.032703] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:29.071 01:50:34 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:29.071 00:06:29.071 real 0m1.713s 00:06:29.071 user 0m1.699s 00:06:29.071 sys 0m0.445s 00:06:29.071 01:50:34 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.071 01:50:34 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.071 ************************************ 00:06:29.071 END TEST raid1_resize_data_offset_test 00:06:29.071 ************************************ 00:06:29.071 01:50:34 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:29.071 01:50:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:29.071 01:50:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.071 01:50:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.071 ************************************ 00:06:29.071 START TEST raid0_resize_superblock_test 00:06:29.071 ************************************ 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71385 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71385' 00:06:29.071 Process raid pid: 71385 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71385 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71385 ']' 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.071 01:50:34 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.071 [2024-12-07 01:50:34.417432] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:29.071 [2024-12-07 01:50:34.417561] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.330 [2024-12-07 01:50:34.562554] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.330 [2024-12-07 01:50:34.605994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.330 [2024-12-07 01:50:34.647162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.330 [2024-12-07 01:50:34.647201] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:29.898 malloc0 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:29.898 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 [2024-12-07 01:50:35.364248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:30.157 [2024-12-07 01:50:35.364306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.157 [2024-12-07 01:50:35.364334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:30.157 [2024-12-07 01:50:35.364346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.157 [2024-12-07 01:50:35.366435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.157 [2024-12-07 01:50:35.366473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:30.157 pt0 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 b7573223-c031-4b0c-ae4c-c868a21b95e5 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 b8ad9409-2781-489f-991d-b6a2414a02f9 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 7fccbd28-ba15-4d8f-8dfd-240f04880939 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 [2024-12-07 01:50:35.498898] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8ad9409-2781-489f-991d-b6a2414a02f9 is claimed 00:06:30.157 [2024-12-07 01:50:35.498976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7fccbd28-ba15-4d8f-8dfd-240f04880939 is claimed 00:06:30.157 [2024-12-07 01:50:35.499080] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:30.157 [2024-12-07 01:50:35.499092] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:30.157 [2024-12-07 01:50:35.499349] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:30.157 [2024-12-07 01:50:35.499487] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:30.157 [2024-12-07 01:50:35.499509] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:30.157 [2024-12-07 01:50:35.499656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.157 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:30.157 01:50:35 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.158 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.158 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.158 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.158 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.158 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:30.158 [2024-12-07 01:50:35.606907] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 [2024-12-07 01:50:35.654755] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.418 [2024-12-07 01:50:35.654781] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b8ad9409-2781-489f-991d-b6a2414a02f9' was resized: old size 131072, new size 204800 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 [2024-12-07 01:50:35.666649] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.418 [2024-12-07 01:50:35.666689] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '7fccbd28-ba15-4d8f-8dfd-240f04880939' was resized: old size 131072, new size 204800 00:06:30.418 [2024-12-07 01:50:35.666708] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:30.418 01:50:35 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 [2024-12-07 01:50:35.770578] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.418 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.418 [2024-12-07 01:50:35.798366] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:30.418 [2024-12-07 01:50:35.798420] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:30.418 [2024-12-07 01:50:35.798433] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:30.418 [2024-12-07 01:50:35.798452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:30.418 [2024-12-07 01:50:35.798541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.419 [2024-12-07 01:50:35.798577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.419 [2024-12-07 01:50:35.798589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.419 [2024-12-07 01:50:35.810287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:30.419 [2024-12-07 01:50:35.810326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:30.419 [2024-12-07 01:50:35.810342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:30.419 [2024-12-07 01:50:35.810351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:30.419 [2024-12-07 01:50:35.812495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:30.419 [2024-12-07 01:50:35.812531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:30.419 [2024-12-07 01:50:35.813964] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b8ad9409-2781-489f-991d-b6a2414a02f9 00:06:30.419 [2024-12-07 01:50:35.814013] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev b8ad9409-2781-489f-991d-b6a2414a02f9 is claimed 00:06:30.419 [2024-12-07 01:50:35.814105] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 7fccbd28-ba15-4d8f-8dfd-240f04880939 00:06:30.419 [2024-12-07 01:50:35.814126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 7fccbd28-ba15-4d8f-8dfd-240f04880939 is claimed 00:06:30.419 [2024-12-07 01:50:35.814199] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 7fccbd28-ba15-4d8f-8dfd-240f04880939 (2) smaller than existing raid bdev Raid (3) 00:06:30.419 [2024-12-07 01:50:35.814225] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b8ad9409-2781-489f-991d-b6a2414a02f9: File exists 00:06:30.419 [2024-12-07 01:50:35.814255] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:30.419 [2024-12-07 01:50:35.814263] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:30.419 [2024-12-07 01:50:35.814490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:30.419 [2024-12-07 01:50:35.814631] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:30.419 [2024-12-07 01:50:35.814649] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:30.419 [2024-12-07 01:50:35.814802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.419 pt0 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.419 [2024-12-07 01:50:35.842978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71385 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71385 ']' 00:06:30.419 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71385 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71385 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.679 killing process with pid 71385 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71385' 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71385 00:06:30.679 [2024-12-07 01:50:35.920614] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.679 [2024-12-07 01:50:35.920671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.679 [2024-12-07 01:50:35.920722] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.679 [2024-12-07 01:50:35.920730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:30.679 01:50:35 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71385 00:06:30.679 [2024-12-07 01:50:36.078098] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.939 01:50:36 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:30.939 00:06:30.939 real 0m1.968s 00:06:30.939 user 0m2.220s 00:06:30.939 sys 0m0.500s 00:06:30.939 01:50:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.939 01:50:36 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.939 
************************************ 00:06:30.939 END TEST raid0_resize_superblock_test 00:06:30.939 ************************************ 00:06:30.939 01:50:36 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:30.939 01:50:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.939 01:50:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.939 01:50:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.939 ************************************ 00:06:30.939 START TEST raid1_resize_superblock_test 00:06:30.939 ************************************ 00:06:30.939 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1 00:06:30.939 01:50:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:30.939 01:50:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71458 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.940 Process raid pid: 71458 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71458' 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71458 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 71458 ']' 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.940 01:50:36 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.199 [2024-12-07 01:50:36.452620] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:31.199 [2024-12-07 01:50:36.452755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.199 [2024-12-07 01:50:36.596903] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.200 [2024-12-07 01:50:36.640956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.460 [2024-12-07 01:50:36.682008] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.460 [2024-12-07 01:50:36.682051] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.030 malloc0 00:06:32.030 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.031 01:50:37 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.031 [2024-12-07 01:50:37.408506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:32.031 [2024-12-07 01:50:37.408572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.031 [2024-12-07 01:50:37.408594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:32.031 [2024-12-07 01:50:37.408605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.031 [2024-12-07 01:50:37.410684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.031 [2024-12-07 01:50:37.410715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:32.031 pt0 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.031 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 78817a86-09f4-44f4-81f2-c9e7b90c5a86 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 63c5d395-f849-4941-a150-9b386416f218 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 3ecfc459-47cb-449d-846a-987d456be9f2 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 [2024-12-07 01:50:37.543846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63c5d395-f849-4941-a150-9b386416f218 is claimed 00:06:32.306 [2024-12-07 01:50:37.543931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ecfc459-47cb-449d-846a-987d456be9f2 is claimed 00:06:32.306 [2024-12-07 01:50:37.544049] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:32.306 [2024-12-07 01:50:37.544065] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:32.306 [2024-12-07 01:50:37.544357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:32.306 [2024-12-07 01:50:37.544504] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:32.306 [2024-12-07 01:50:37.544525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:32.306 [2024-12-07 01:50:37.544641] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 [2024-12-07 01:50:37.655857] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 [2024-12-07 01:50:37.699715] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.306 [2024-12-07 01:50:37.699743] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '63c5d395-f849-4941-a150-9b386416f218' was resized: old size 131072, new size 204800 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:32.306 01:50:37 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 [2024-12-07 01:50:37.707622] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:32.306 [2024-12-07 01:50:37.707649] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3ecfc459-47cb-449d-846a-987d456be9f2' was resized: old size 131072, new size 204800 00:06:32.306 [2024-12-07 01:50:37.707684] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:32.306 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.613 [2024-12-07 01:50:37.815559] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.613 [2024-12-07 01:50:37.839346] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:32.613 [2024-12-07 01:50:37.839408] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:32.613 [2024-12-07 01:50:37.839438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:32.613 [2024-12-07 01:50:37.839582] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:32.613 [2024-12-07 01:50:37.839742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.613 [2024-12-07 01:50:37.839808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.613 [2024-12-07 01:50:37.839823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.613 [2024-12-07 01:50:37.851268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:32.613 [2024-12-07 01:50:37.851326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:32.613 [2024-12-07 01:50:37.851343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:32.613 [2024-12-07 01:50:37.851355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:32.613 [2024-12-07 01:50:37.853449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:32.613 [2024-12-07 01:50:37.853480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:32.613 [2024-12-07 01:50:37.854773] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
63c5d395-f849-4941-a150-9b386416f218 00:06:32.613 [2024-12-07 01:50:37.854864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 63c5d395-f849-4941-a150-9b386416f218 is claimed 00:06:32.613 [2024-12-07 01:50:37.854945] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3ecfc459-47cb-449d-846a-987d456be9f2 00:06:32.613 [2024-12-07 01:50:37.854966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3ecfc459-47cb-449d-846a-987d456be9f2 is claimed 00:06:32.613 [2024-12-07 01:50:37.855046] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3ecfc459-47cb-449d-846a-987d456be9f2 (2) smaller than existing raid bdev Raid (3) 00:06:32.613 [2024-12-07 01:50:37.855073] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 63c5d395-f849-4941-a150-9b386416f218: File exists 00:06:32.613 [2024-12-07 01:50:37.855107] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:32.613 [2024-12-07 01:50:37.855115] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:32.613 [2024-12-07 01:50:37.855348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:32.613 [2024-12-07 01:50:37.855494] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:32.613 [2024-12-07 01:50:37.855513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:32.613 [2024-12-07 01:50:37.855656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:32.613 pt0 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.613 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.614 [2024-12-07 01:50:37.879861] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71458 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 71458 ']' 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 71458 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71458 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:32.614 killing process with pid 71458 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71458' 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 71458 00:06:32.614 [2024-12-07 01:50:37.955497] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:32.614 [2024-12-07 01:50:37.955569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:32.614 [2024-12-07 01:50:37.955617] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:32.614 [2024-12-07 01:50:37.955626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:32.614 01:50:37 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 71458 00:06:32.875 [2024-12-07 01:50:38.113493] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:33.135 01:50:38 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:33.135 00:06:33.135 real 0m1.974s 00:06:33.135 user 0m2.249s 00:06:33.135 sys 0m0.477s 00:06:33.135 01:50:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.135 01:50:38 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.135 ************************************ 00:06:33.135 END TEST raid1_resize_superblock_test 00:06:33.135 
************************************ 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:33.135 01:50:38 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:33.135 01:50:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:33.135 01:50:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.135 01:50:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:33.135 ************************************ 00:06:33.135 START TEST raid_function_test_raid0 00:06:33.135 ************************************ 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71534 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:33.135 Process raid pid: 71534 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71534' 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71534 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 
71534 ']' 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.135 01:50:38 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:33.135 [2024-12-07 01:50:38.521995] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:33.135 [2024-12-07 01:50:38.522129] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.394 [2024-12-07 01:50:38.669323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.394 [2024-12-07 01:50:38.713524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.394 [2024-12-07 01:50:38.754561] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.394 [2024-12-07 01:50:38.754615] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:33.964 01:50:39 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:33.964 Base_1 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:33.964 Base_2 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:33.964 [2024-12-07 01:50:39.404623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:33.964 [2024-12-07 01:50:39.407913] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:33.964 [2024-12-07 01:50:39.408028] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:33.964 [2024-12-07 01:50:39.408050] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:33.964 [2024-12-07 01:50:39.408540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:33.964 [2024-12-07 01:50:39.408788] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:33.964 [2024-12-07 01:50:39.408819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:33.964 [2024-12-07 01:50:39.409120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:33.964 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:34.225 [2024-12-07 01:50:39.636693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:34.225 /dev/nbd0 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.225 1+0 records in 00:06:34.225 1+0 records out 00:06:34.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024429 s, 16.8 MB/s 00:06:34.225 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 
-- # size=4096 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.485 { 00:06:34.485 "nbd_device": "/dev/nbd0", 00:06:34.485 "bdev_name": "raid" 00:06:34.485 } 00:06:34.485 ]' 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.485 { 00:06:34.485 "nbd_device": "/dev/nbd0", 00:06:34.485 "bdev_name": "raid" 00:06:34.485 } 00:06:34.485 ]' 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:34.485 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:34.746 01:50:39 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:34.746 
01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:34.746 4096+0 records in 00:06:34.746 4096+0 records out 00:06:34.746 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0308705 s, 67.9 MB/s 00:06:34.746 01:50:39 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:34.746 4096+0 records in 00:06:34.746 4096+0 records out 00:06:34.746 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.173456 s, 12.1 MB/s 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:34.746 128+0 records in 00:06:34.746 128+0 records out 00:06:34.746 65536 bytes (66 kB, 64 KiB) copied, 0.00114744 s, 57.1 MB/s 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:34.746 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:35.007 2035+0 records in 00:06:35.007 2035+0 records out 00:06:35.007 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0131283 s, 79.4 MB/s 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:35.007 456+0 records in 00:06:35.007 456+0 records out 00:06:35.007 233472 bytes (233 kB, 228 KiB) copied, 0.00385601 s, 60.5 MB/s 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:35.007 01:50:40 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.007 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:35.268 [2024-12-07 01:50:40.491363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.268 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71534 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 71534 ']' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@954 -- # kill -0 71534 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # uname 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71534 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.529 killing process with pid 71534 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71534' 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 71534 00:06:35.529 [2024-12-07 01:50:40.808395] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.529 [2024-12-07 01:50:40.808520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.529 01:50:40 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 71534 00:06:35.529 [2024-12-07 01:50:40.808574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:35.529 [2024-12-07 01:50:40.808586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:35.529 [2024-12-07 01:50:40.831782] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:35.790 01:50:41 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:35.790 00:06:35.790 real 0m2.627s 00:06:35.790 user 0m3.239s 00:06:35.790 sys 0m0.878s 00:06:35.790 01:50:41 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.790 01:50:41 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@10 -- # set +x 00:06:35.790 ************************************ 00:06:35.790 END TEST raid_function_test_raid0 00:06:35.790 ************************************ 00:06:35.790 01:50:41 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:35.790 01:50:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:35.790 01:50:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.790 01:50:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:35.790 ************************************ 00:06:35.790 START TEST raid_function_test_concat 00:06:35.790 ************************************ 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71647 00:06:35.790 Process raid pid: 71647 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71647' 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71647 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 71647 ']' 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.790 01:50:41 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:35.790 [2024-12-07 01:50:41.214330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:35.790 [2024-12-07 01:50:41.214448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.051 [2024-12-07 01:50:41.360353] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.051 [2024-12-07 01:50:41.405784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.051 [2024-12-07 01:50:41.447211] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.051 [2024-12-07 01:50:41.447247] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.621 Base_1 
00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.621 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.881 Base_2 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.882 [2024-12-07 01:50:42.106193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:36.882 [2024-12-07 01:50:42.108117] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:36.882 [2024-12-07 01:50:42.108185] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:36.882 [2024-12-07 01:50:42.108197] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:36.882 [2024-12-07 01:50:42.108453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:36.882 [2024-12-07 01:50:42.108587] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:36.882 [2024-12-07 01:50:42.108604] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:36.882 [2024-12-07 01:50:42.108767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:36.882 01:50:42 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:36.882 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:06:36.882 [2024-12-07 01:50:42.337852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:37.142 /dev/nbd0 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.142 1+0 records in 00:06:37.142 1+0 records out 00:06:37.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331024 s, 12.4 MB/s 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.142 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.402 { 00:06:37.402 "nbd_device": "/dev/nbd0", 00:06:37.402 "bdev_name": "raid" 00:06:37.402 } 00:06:37.402 ]' 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.402 { 00:06:37.402 "nbd_device": "/dev/nbd0", 00:06:37.402 "bdev_name": "raid" 00:06:37.402 } 00:06:37.402 ]' 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:37.402 01:50:42 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:37.402 4096+0 records in 00:06:37.402 4096+0 records out 00:06:37.402 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0332391 s, 63.1 MB/s 00:06:37.402 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:37.662 4096+0 records in 00:06:37.662 4096+0 records out 00:06:37.662 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.179517 s, 11.7 MB/s 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:37.662 128+0 records in 00:06:37.662 128+0 records out 00:06:37.662 65536 bytes (66 kB, 64 KiB) copied, 0.000460169 s, 142 MB/s 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:37.662 2035+0 records in 00:06:37.662 2035+0 records out 00:06:37.662 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128357 s, 81.2 MB/s 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:37.662 456+0 records in 00:06:37.662 456+0 records out 00:06:37.662 233472 bytes (233 kB, 228 KiB) copied, 0.00354537 s, 65.9 MB/s 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.662 01:50:42 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.922 [2024-12-07 01:50:43.192657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.922 01:50:43 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:37.922 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71647 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 71647 ']' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- 
# kill -0 71647 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71647 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.181 killing process with pid 71647 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71647' 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 71647 00:06:38.181 [2024-12-07 01:50:43.504390] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.181 [2024-12-07 01:50:43.504519] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.181 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 71647 00:06:38.181 [2024-12-07 01:50:43.504582] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.181 [2024-12-07 01:50:43.504594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:38.181 [2024-12-07 01:50:43.528037] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:38.442 01:50:43 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:38.442 00:06:38.442 real 0m2.632s 00:06:38.442 user 0m3.254s 00:06:38.442 sys 0m0.887s 00:06:38.442 01:50:43 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.442 01:50:43 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@10 -- # set +x 00:06:38.442 ************************************ 00:06:38.442 END TEST raid_function_test_concat 00:06:38.442 ************************************ 00:06:38.442 01:50:43 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:38.442 01:50:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.442 01:50:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.442 01:50:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:38.442 ************************************ 00:06:38.442 START TEST raid0_resize_test 00:06:38.442 ************************************ 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71758 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:38.442 Process raid pid: 71758 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71758' 
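The raid_function_test_concat run above discards three ranges with blkdiscard; the byte offsets and lengths in the log are simply the `unmap_blk_offs` / `unmap_blk_nums` block values scaled by the 512-byte logical block size that `lsblk -o LOG-SEC` reported. A minimal sketch of that arithmetic (the arrays and block size are copied from the log; this is not part of the test scripts themselves):

```python
# Block-to-byte conversion behind the blkdiscard calls logged above.
# unmap_blk_offs / unmap_blk_nums are the arrays from bdev_raid.sh;
# blksize=512 is the LOG-SEC value lsblk reported for /dev/nbd0.
blksize = 512
unmap_blk_offs = [0, 1028, 321]
unmap_blk_nums = [128, 2035, 456]

for off_blk, num_blk in zip(unmap_blk_offs, unmap_blk_nums):
    unmap_off = off_blk * blksize   # the -o argument to blkdiscard
    unmap_len = num_blk * blksize   # the -l argument to blkdiscard
    print(f"blkdiscard -o {unmap_off} -l {unmap_len} /dev/nbd0")
# prints the same three ranges seen in the log:
#   -o 0      -l 65536
#   -o 526336 -l 1041920
#   -o 164352 -l 233472
```

Each discarded range is zeroed in the local reference file with `dd if=/dev/zero ... conv=notrunc` at the same block offset, so the `cmp -b -n 2097152` after each discard verifies the raid bdev returns zeroes for unmapped blocks.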
00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71758 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71758 ']' 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.442 01:50:43 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.701 [2024-12-07 01:50:43.922475] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:38.701 [2024-12-07 01:50:43.922604] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.701 [2024-12-07 01:50:44.069081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.701 [2024-12-07 01:50:44.114186] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.701 [2024-12-07 01:50:44.155162] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:38.701 [2024-12-07 01:50:44.155198] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 Base_1 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 Base_2 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 [2024-12-07 01:50:44.767802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:39.638 [2024-12-07 01:50:44.769578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:39.638 [2024-12-07 01:50:44.769646] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:39.638 [2024-12-07 01:50:44.769663] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:39.638 [2024-12-07 01:50:44.769924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:39.638 [2024-12-07 01:50:44.770048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:39.638 [2024-12-07 01:50:44.770067] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:39.638 [2024-12-07 01:50:44.770172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 [2024-12-07 01:50:44.775765] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:39.638 [2024-12-07 01:50:44.775790] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:39.638 true 
00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 [2024-12-07 01:50:44.787894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.638 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.638 [2024-12-07 01:50:44.835628] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:39.638 [2024-12-07 01:50:44.835652] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:39.639 [2024-12-07 01:50:44.835696] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:39.639 true 
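The size bookkeeping the raid0 resize test checks above can be sketched as follows. This is only the arithmetic implied by the logged values (32 MiB null bases, 512-byte blocks, `blockcnt` going from 131072 to 262144), not SPDK code, and it ignores any strip-size rounding since 65536 blocks divide evenly here:

```python
# Capacity arithmetic behind the raid0 resize checks logged above.
# Each null base bdev starts at 32 MiB with 512-byte blocks; a two-disk
# raid0 exposes roughly the sum of the base capacities.
MB = 1024 * 1024
blksize = 512
base_blocks = 32 * MB // blksize           # 65536 blocks per base

raid0_blocks = 2 * base_blocks             # 131072 -> 'blockcnt 131072'
raid_size_mb = raid0_blocks * blksize // MB  # 64, the first expected_size

resized_base_blocks = 64 * MB // blksize   # after 'bdev_null_resize Base_N 64'
raid0_blocks = 2 * resized_base_blocks     # 262144, once BOTH bases grow
raid_size_mb = raid0_blocks * blksize // MB  # 128, the second expected_size
```

Note the log shows the raid bdev's block count only changes after the second base is resized ("block count was changed from 131072 to 262144" appears after the `Base_2` resize): a raid0 can only use capacity available on every member.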
00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:39.639 [2024-12-07 01:50:44.847773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71758 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71758 ']' 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 71758 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71758 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.639 01:50:44 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.639 killing process with pid 71758 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71758' 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 71758 00:06:39.639 [2024-12-07 01:50:44.933711] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.639 [2024-12-07 01:50:44.933785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.639 [2024-12-07 01:50:44.933834] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:39.639 [2024-12-07 01:50:44.933844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:39.639 01:50:44 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 71758 00:06:39.639 [2024-12-07 01:50:44.935302] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.898 01:50:45 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:39.898 00:06:39.898 real 0m1.333s 00:06:39.898 user 0m1.502s 00:06:39.898 sys 0m0.291s 00:06:39.898 01:50:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.898 01:50:45 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.898 ************************************ 00:06:39.898 END TEST raid0_resize_test 00:06:39.898 ************************************ 00:06:39.898 01:50:45 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:39.898 01:50:45 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:39.898 01:50:45 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.898 01:50:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.898 
************************************
00:06:39.898 START TEST raid1_resize_test
00:06:39.898 ************************************
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71809
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:39.898 Process raid pid: 71809
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71809'
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71809
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 71809 ']'
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:39.898 01:50:45 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:39.898 [2024-12-07 01:50:45.322690] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... [2024-12-07 01:50:45.322816] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:40.157 [2024-12-07 01:50:45.468206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.157 [2024-12-07 01:50:45.514154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.157 [2024-12-07 01:50:45.555139] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:40.157 [2024-12-07 01:50:45.555178] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.723 Base_1
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.723 Base_2
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.723 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.724 [2024-12-07 01:50:46.163922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:06:40.724 [2024-12-07 01:50:46.165669] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:06:40.724 [2024-12-07 01:50:46.165751] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:06:40.724 [2024-12-07 01:50:46.165762] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:06:40.724 [2024-12-07 01:50:46.166010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0
00:06:40.724 [2024-12-07 01:50:46.166111] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:06:40.724 [2024-12-07 01:50:46.166120] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200
00:06:40.724 [2024-12-07 01:50:46.166226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.724 [2024-12-07 01:50:46.171867] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:40.724 [2024-12-07 01:50:46.171892] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
true
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:06:40.724 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.984 [2024-12-07 01:50:46.184048] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.984 [2024-12-07 01:50:46.227782] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:40.984 [2024-12-07 01:50:46.227805] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:06:40.984 [2024-12-07 01:50:46.227826] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
true
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:40.984 [2024-12-07 01:50:46.243903] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71809
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 71809 ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 71809
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71809
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 71809
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71809'
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 71809
00:06:40.984 [2024-12-07 01:50:46.305754] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:40.984 [2024-12-07 01:50:46.305827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:40.984 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 71809
00:06:40.984 [2024-12-07 01:50:46.306207] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:40.984 [2024-12-07 01:50:46.306230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline
00:06:40.984 [2024-12-07 01:50:46.307362] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:41.244 01:50:46 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:06:41.244
00:06:41.244 real 0m1.299s
00:06:41.244 user 0m1.463s
00:06:41.244 sys 0m0.280s
00:06:41.244 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:41.244 01:50:46 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.244 ************************************
00:06:41.244 END TEST raid1_resize_test
00:06:41.244 ************************************
00:06:41.244 01:50:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:06:41.244 01:50:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:06:41.244 01:50:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false
00:06:41.244 01:50:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:06:41.244 01:50:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:41.244 01:50:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:41.244 ************************************
00:06:41.244 START TEST raid_state_function_test
00:06:41.244 ************************************
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:06:41.244 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71855
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:41.245 Process raid pid: 71855
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71855'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71855
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71855 ']'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:41.245 01:50:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:41.504 [2024-12-07 01:50:46.708242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... [2024-12-07 01:50:46.708378] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:41.504 [2024-12-07 01:50:46.835207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.504 [2024-12-07 01:50:46.878591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.504 [2024-12-07 01:50:46.919722] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:41.504 [2024-12-07 01:50:46.919766] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:42.073 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.073 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:06:42.073 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:42.073 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.073 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.332 [2024-12-07 01:50:47.536514] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 [2024-12-07 01:50:47.536563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now [2024-12-07 01:50:47.536574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 [2024-12-07 01:50:47.536584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:42.332 "name": "Existed_Raid",
00:06:42.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:42.332 "strip_size_kb": 64,
00:06:42.332 "state": "configuring",
00:06:42.332 "raid_level": "raid0",
00:06:42.332 "superblock": false,
00:06:42.332 "num_base_bdevs": 2,
00:06:42.332 "num_base_bdevs_discovered": 0,
00:06:42.332 "num_base_bdevs_operational": 2,
00:06:42.332 "base_bdevs_list": [
00:06:42.332 {
00:06:42.332 "name": "BaseBdev1",
00:06:42.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:42.332 "is_configured": false,
00:06:42.332 "data_offset": 0,
00:06:42.332 "data_size": 0
00:06:42.332 },
00:06:42.332 {
00:06:42.332 "name": "BaseBdev2",
00:06:42.332 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:42.332 "is_configured": false,
00:06:42.332 "data_offset": 0,
00:06:42.332 "data_size": 0
00:06:42.332 }
00:06:42.332 ]
00:06:42.332 }'
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:42.332 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 [2024-12-07 01:50:47.963690] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid [2024-12-07 01:50:47.963733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 [2024-12-07 01:50:47.971673] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 [2024-12-07 01:50:47.971717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now [2024-12-07 01:50:47.971750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 [2024-12-07 01:50:47.971760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 [2024-12-07 01:50:47.988452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:06:42.592 BaseBdev1
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.592 01:50:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.592 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:06:42.592 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.592 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.592 [
00:06:42.592 {
00:06:42.592 "name": "BaseBdev1",
00:06:42.592 "aliases": [
00:06:42.592 "55158f73-d2d8-48a1-907a-be76cb48fb0c"
00:06:42.592 ],
00:06:42.592 "product_name": "Malloc disk",
00:06:42.592 "block_size": 512,
00:06:42.592 "num_blocks": 65536,
00:06:42.592 "uuid": "55158f73-d2d8-48a1-907a-be76cb48fb0c",
00:06:42.592 "assigned_rate_limits": {
00:06:42.592 "rw_ios_per_sec": 0,
00:06:42.592 "rw_mbytes_per_sec": 0,
00:06:42.592 "r_mbytes_per_sec": 0,
00:06:42.592 "w_mbytes_per_sec": 0
00:06:42.592 },
00:06:42.592 "claimed": true,
00:06:42.592 "claim_type": "exclusive_write",
00:06:42.592 "zoned": false,
00:06:42.592 "supported_io_types": {
00:06:42.592 "read": true,
00:06:42.592 "write": true,
00:06:42.592 "unmap": true,
00:06:42.592 "flush": true,
00:06:42.592 "reset": true,
00:06:42.592 "nvme_admin": false,
00:06:42.592 "nvme_io": false,
00:06:42.592 "nvme_io_md": false,
00:06:42.592 "write_zeroes": true,
00:06:42.592 "zcopy": true,
00:06:42.592 "get_zone_info": false,
00:06:42.592 "zone_management": false,
00:06:42.592 "zone_append": false,
00:06:42.592 "compare": false,
00:06:42.592 "compare_and_write": false,
00:06:42.592 "abort": true,
00:06:42.592 "seek_hole": false,
00:06:42.592 "seek_data": false,
00:06:42.592 "copy": true,
00:06:42.592 "nvme_iov_md": false
00:06:42.592 },
00:06:42.592 "memory_domains": [
00:06:42.592 {
00:06:42.592 "dma_device_id": "system",
00:06:42.592 "dma_device_type": 1
00:06:42.592 },
00:06:42.592 {
00:06:42.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:42.592 "dma_device_type": 2
00:06:42.592 }
00:06:42.592 ],
00:06:42.592 "driver_specific": {}
00:06:42.592 }
00:06:42.592 ]
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:42.593 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:42.853 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:42.853 "name": "Existed_Raid",
00:06:42.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:42.853 "strip_size_kb": 64,
00:06:42.853 "state": "configuring",
00:06:42.853 "raid_level": "raid0",
00:06:42.853 "superblock": false,
00:06:42.853 "num_base_bdevs": 2,
00:06:42.853 "num_base_bdevs_discovered": 1,
00:06:42.853 "num_base_bdevs_operational": 2,
00:06:42.853 "base_bdevs_list": [
00:06:42.853 {
00:06:42.853 "name": "BaseBdev1",
00:06:42.853 "uuid": "55158f73-d2d8-48a1-907a-be76cb48fb0c",
00:06:42.853 "is_configured": true,
00:06:42.853 "data_offset": 0,
00:06:42.853 "data_size": 65536
00:06:42.853 },
00:06:42.853 {
00:06:42.853 "name": "BaseBdev2",
00:06:42.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:42.853 "is_configured": false,
00:06:42.853 "data_offset": 0,
00:06:42.853 "data_size": 0
00:06:42.853 }
00:06:42.853 ]
00:06:42.853 }'
00:06:42.853 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:42.853 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.112 [2024-12-07 01:50:48.459689] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid [2024-12-07 01:50:48.459738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.112 [2024-12-07 01:50:48.471717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed [2024-12-07 01:50:48.473598] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 [2024-12-07 01:50:48.473631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:43.112 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:06:43.113 "name": "Existed_Raid",
00:06:43.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:43.113 "strip_size_kb": 64,
00:06:43.113 "state": "configuring",
00:06:43.113 "raid_level": "raid0",
00:06:43.113 "superblock": false,
00:06:43.113 "num_base_bdevs": 2,
00:06:43.113 "num_base_bdevs_discovered": 1,
00:06:43.113 "num_base_bdevs_operational": 2,
00:06:43.113 "base_bdevs_list": [
00:06:43.113 {
00:06:43.113 "name": "BaseBdev1",
00:06:43.113 "uuid": "55158f73-d2d8-48a1-907a-be76cb48fb0c",
00:06:43.113 "is_configured": true,
00:06:43.113 "data_offset": 0,
00:06:43.113 "data_size": 65536
00:06:43.113 },
00:06:43.113 {
00:06:43.113 "name": "BaseBdev2",
00:06:43.113 "uuid": "00000000-0000-0000-0000-000000000000",
00:06:43.113 "is_configured": false,
00:06:43.113 "data_offset": 0,
00:06:43.113 "data_size": 0
00:06:43.113 }
00:06:43.113 ]
00:06:43.113 }'
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:06:43.113 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.683 [2024-12-07 01:50:48.908240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed [2024-12-07 01:50:48.908297] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 [2024-12-07 01:50:48.908308] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 [2024-12-07 01:50:48.908639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 [2024-12-07 01:50:48.908845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 [2024-12-07 01:50:48.908872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 [2024-12-07 01:50:48.909097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:43.683 BaseBdev2
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.683 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.683 [
00:06:43.683 {
00:06:43.683 "name": "BaseBdev2",
00:06:43.684 "aliases": [
00:06:43.684 "2f2baae1-8c33-422d-87b5-d82fc8036e53"
00:06:43.684 ],
00:06:43.684 "product_name": "Malloc disk",
00:06:43.684 "block_size": 512,
00:06:43.684 "num_blocks": 65536,
00:06:43.684 "uuid": "2f2baae1-8c33-422d-87b5-d82fc8036e53",
00:06:43.684 "assigned_rate_limits": {
00:06:43.684 "rw_ios_per_sec": 0,
00:06:43.684 "rw_mbytes_per_sec": 0,
00:06:43.684 "r_mbytes_per_sec": 0,
00:06:43.684 "w_mbytes_per_sec": 0
00:06:43.684 },
00:06:43.684 "claimed": true,
00:06:43.684 "claim_type": "exclusive_write",
00:06:43.684 "zoned": false,
00:06:43.684 "supported_io_types": {
00:06:43.684 "read": true,
00:06:43.684 "write": true,
00:06:43.684 "unmap": true,
00:06:43.684 "flush": true,
00:06:43.684 "reset": true,
00:06:43.684 "nvme_admin": false,
00:06:43.684 "nvme_io": false,
00:06:43.684 "nvme_io_md": false,
00:06:43.684 "write_zeroes": true,
00:06:43.684 "zcopy": true,
00:06:43.684 "get_zone_info": false,
00:06:43.684 "zone_management": false,
00:06:43.684 "zone_append": false,
00:06:43.684 "compare": false,
00:06:43.684 "compare_and_write": false,
00:06:43.684 "abort": true,
00:06:43.684 "seek_hole": false,
00:06:43.684 "seek_data": false,
00:06:43.684 "copy": true,
00:06:43.684 "nvme_iov_md": false
00:06:43.684 },
00:06:43.684 "memory_domains": [
00:06:43.684 {
00:06:43.684 "dma_device_id": "system",
00:06:43.684 "dma_device_type": 1
00:06:43.684 },
00:06:43.684 {
00:06:43.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:43.684 "dma_device_type": 2
00:06:43.684 }
00:06:43.684 ],
00:06:43.684 "driver_specific": {}
00:06:43.684 }
00:06:43.684 ]
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- #
raid_bdev_info='{ 00:06:43.684 "name": "Existed_Raid", 00:06:43.684 "uuid": "4fc0beea-12c0-4620-acfa-53a2cde0997c", 00:06:43.684 "strip_size_kb": 64, 00:06:43.684 "state": "online", 00:06:43.684 "raid_level": "raid0", 00:06:43.684 "superblock": false, 00:06:43.684 "num_base_bdevs": 2, 00:06:43.684 "num_base_bdevs_discovered": 2, 00:06:43.684 "num_base_bdevs_operational": 2, 00:06:43.684 "base_bdevs_list": [ 00:06:43.684 { 00:06:43.684 "name": "BaseBdev1", 00:06:43.684 "uuid": "55158f73-d2d8-48a1-907a-be76cb48fb0c", 00:06:43.684 "is_configured": true, 00:06:43.684 "data_offset": 0, 00:06:43.684 "data_size": 65536 00:06:43.684 }, 00:06:43.684 { 00:06:43.684 "name": "BaseBdev2", 00:06:43.684 "uuid": "2f2baae1-8c33-422d-87b5-d82fc8036e53", 00:06:43.684 "is_configured": true, 00:06:43.684 "data_offset": 0, 00:06:43.684 "data_size": 65536 00:06:43.684 } 00:06:43.684 ] 00:06:43.684 }' 00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:43.684 01:50:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:43.944 [2024-12-07 01:50:49.363832] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.944 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:44.205 "name": "Existed_Raid", 00:06:44.205 "aliases": [ 00:06:44.205 "4fc0beea-12c0-4620-acfa-53a2cde0997c" 00:06:44.205 ], 00:06:44.205 "product_name": "Raid Volume", 00:06:44.205 "block_size": 512, 00:06:44.205 "num_blocks": 131072, 00:06:44.205 "uuid": "4fc0beea-12c0-4620-acfa-53a2cde0997c", 00:06:44.205 "assigned_rate_limits": { 00:06:44.205 "rw_ios_per_sec": 0, 00:06:44.205 "rw_mbytes_per_sec": 0, 00:06:44.205 "r_mbytes_per_sec": 0, 00:06:44.205 "w_mbytes_per_sec": 0 00:06:44.205 }, 00:06:44.205 "claimed": false, 00:06:44.205 "zoned": false, 00:06:44.205 "supported_io_types": { 00:06:44.205 "read": true, 00:06:44.205 "write": true, 00:06:44.205 "unmap": true, 00:06:44.205 "flush": true, 00:06:44.205 "reset": true, 00:06:44.205 "nvme_admin": false, 00:06:44.205 "nvme_io": false, 00:06:44.205 "nvme_io_md": false, 00:06:44.205 "write_zeroes": true, 00:06:44.205 "zcopy": false, 00:06:44.205 "get_zone_info": false, 00:06:44.205 "zone_management": false, 00:06:44.205 "zone_append": false, 00:06:44.205 "compare": false, 00:06:44.205 "compare_and_write": false, 00:06:44.205 "abort": false, 00:06:44.205 "seek_hole": false, 00:06:44.205 "seek_data": false, 00:06:44.205 "copy": false, 00:06:44.205 "nvme_iov_md": false 00:06:44.205 }, 00:06:44.205 "memory_domains": [ 00:06:44.205 { 00:06:44.205 "dma_device_id": "system", 00:06:44.205 "dma_device_type": 1 00:06:44.205 }, 00:06:44.205 { 00:06:44.205 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:06:44.205 "dma_device_type": 2 00:06:44.205 }, 00:06:44.205 { 00:06:44.205 "dma_device_id": "system", 00:06:44.205 "dma_device_type": 1 00:06:44.205 }, 00:06:44.205 { 00:06:44.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:44.205 "dma_device_type": 2 00:06:44.205 } 00:06:44.205 ], 00:06:44.205 "driver_specific": { 00:06:44.205 "raid": { 00:06:44.205 "uuid": "4fc0beea-12c0-4620-acfa-53a2cde0997c", 00:06:44.205 "strip_size_kb": 64, 00:06:44.205 "state": "online", 00:06:44.205 "raid_level": "raid0", 00:06:44.205 "superblock": false, 00:06:44.205 "num_base_bdevs": 2, 00:06:44.205 "num_base_bdevs_discovered": 2, 00:06:44.205 "num_base_bdevs_operational": 2, 00:06:44.205 "base_bdevs_list": [ 00:06:44.205 { 00:06:44.205 "name": "BaseBdev1", 00:06:44.205 "uuid": "55158f73-d2d8-48a1-907a-be76cb48fb0c", 00:06:44.205 "is_configured": true, 00:06:44.205 "data_offset": 0, 00:06:44.205 "data_size": 65536 00:06:44.205 }, 00:06:44.205 { 00:06:44.205 "name": "BaseBdev2", 00:06:44.205 "uuid": "2f2baae1-8c33-422d-87b5-d82fc8036e53", 00:06:44.205 "is_configured": true, 00:06:44.205 "data_offset": 0, 00:06:44.205 "data_size": 65536 00:06:44.205 } 00:06:44.205 ] 00:06:44.205 } 00:06:44.205 } 00:06:44.205 }' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:44.205 BaseBdev2' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:06:44.205 [2024-12-07 01:50:49.559261] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:44.205 [2024-12-07 01:50:49.559291] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:44.205 [2024-12-07 01:50:49.559346] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.205 01:50:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.205 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.206 "name": "Existed_Raid", 00:06:44.206 "uuid": "4fc0beea-12c0-4620-acfa-53a2cde0997c", 00:06:44.206 "strip_size_kb": 64, 00:06:44.206 "state": "offline", 00:06:44.206 "raid_level": "raid0", 00:06:44.206 "superblock": false, 00:06:44.206 "num_base_bdevs": 2, 00:06:44.206 "num_base_bdevs_discovered": 1, 00:06:44.206 "num_base_bdevs_operational": 1, 00:06:44.206 "base_bdevs_list": [ 00:06:44.206 { 00:06:44.206 "name": null, 00:06:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:44.206 "is_configured": false, 00:06:44.206 "data_offset": 0, 00:06:44.206 "data_size": 65536 00:06:44.206 }, 00:06:44.206 { 00:06:44.206 "name": "BaseBdev2", 00:06:44.206 "uuid": "2f2baae1-8c33-422d-87b5-d82fc8036e53", 00:06:44.206 "is_configured": true, 00:06:44.206 "data_offset": 0, 00:06:44.206 "data_size": 65536 00:06:44.206 } 00:06:44.206 ] 00:06:44.206 }' 00:06:44.206 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.206 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:44.775 01:50:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 [2024-12-07 01:50:50.029736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:44.775 [2024-12-07 01:50:50.029797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.775 01:50:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71855 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 71855 ']' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71855 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71855 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.775 killing process with pid 71855 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71855' 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71855 00:06:44.775 [2024-12-07 01:50:50.136895] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:06:44.775 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71855 00:06:44.775 [2024-12-07 01:50:50.137868] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.037 01:50:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:45.037 00:06:45.037 real 0m3.757s 00:06:45.037 user 0m5.922s 00:06:45.037 sys 0m0.725s 00:06:45.037 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.037 01:50:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.037 ************************************ 00:06:45.037 END TEST raid_state_function_test 00:06:45.037 ************************************ 00:06:45.037 01:50:50 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:45.037 01:50:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:45.037 01:50:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.038 01:50:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.038 ************************************ 00:06:45.038 START TEST raid_state_function_test_sb 00:06:45.038 ************************************ 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72097 00:06:45.038 Process raid pid: 72097 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72097' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72097 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72097 ']' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.038 01:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:45.298 [2024-12-07 01:50:50.532845] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:45.298 [2024-12-07 01:50:50.532946] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.298 [2024-12-07 01:50:50.661014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.298 [2024-12-07 01:50:50.703342] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.298 [2024-12-07 01:50:50.744089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:45.298 [2024-12-07 01:50:50.744128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.239 [2024-12-07 01:50:51.356935] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:46.239 [2024-12-07 01:50:51.356978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:46.239 [2024-12-07 01:50:51.356989] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.239 [2024-12-07 01:50:51.356999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.239 
01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.239 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.239 "name": "Existed_Raid", 00:06:46.239 "uuid": "3ef38a81-b9e6-42be-9abe-a0e279070749", 00:06:46.239 "strip_size_kb": 
64, 00:06:46.239 "state": "configuring", 00:06:46.239 "raid_level": "raid0", 00:06:46.239 "superblock": true, 00:06:46.239 "num_base_bdevs": 2, 00:06:46.239 "num_base_bdevs_discovered": 0, 00:06:46.239 "num_base_bdevs_operational": 2, 00:06:46.239 "base_bdevs_list": [ 00:06:46.239 { 00:06:46.240 "name": "BaseBdev1", 00:06:46.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.240 "is_configured": false, 00:06:46.240 "data_offset": 0, 00:06:46.240 "data_size": 0 00:06:46.240 }, 00:06:46.240 { 00:06:46.240 "name": "BaseBdev2", 00:06:46.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.240 "is_configured": false, 00:06:46.240 "data_offset": 0, 00:06:46.240 "data_size": 0 00:06:46.240 } 00:06:46.240 ] 00:06:46.240 }' 00:06:46.240 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.240 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 [2024-12-07 01:50:51.736183] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:46.508 [2024-12-07 01:50:51.736232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 [2024-12-07 01:50:51.748170] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:46.508 [2024-12-07 01:50:51.748205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:46.508 [2024-12-07 01:50:51.748222] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.508 [2024-12-07 01:50:51.748231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 [2024-12-07 01:50:51.768803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:46.508 BaseBdev1 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 [ 00:06:46.508 { 00:06:46.508 "name": "BaseBdev1", 00:06:46.508 "aliases": [ 00:06:46.508 "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325" 00:06:46.508 ], 00:06:46.508 "product_name": "Malloc disk", 00:06:46.508 "block_size": 512, 00:06:46.508 "num_blocks": 65536, 00:06:46.508 "uuid": "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325", 00:06:46.508 "assigned_rate_limits": { 00:06:46.508 "rw_ios_per_sec": 0, 00:06:46.508 "rw_mbytes_per_sec": 0, 00:06:46.508 "r_mbytes_per_sec": 0, 00:06:46.508 "w_mbytes_per_sec": 0 00:06:46.508 }, 00:06:46.508 "claimed": true, 00:06:46.508 "claim_type": "exclusive_write", 00:06:46.508 "zoned": false, 00:06:46.508 "supported_io_types": { 00:06:46.508 "read": true, 00:06:46.508 "write": true, 00:06:46.508 "unmap": true, 00:06:46.508 "flush": true, 00:06:46.508 "reset": true, 00:06:46.508 "nvme_admin": false, 00:06:46.508 "nvme_io": false, 00:06:46.508 "nvme_io_md": false, 00:06:46.508 "write_zeroes": true, 00:06:46.508 "zcopy": true, 00:06:46.508 "get_zone_info": false, 00:06:46.508 "zone_management": false, 00:06:46.508 "zone_append": false, 00:06:46.508 "compare": false, 00:06:46.508 "compare_and_write": false, 00:06:46.508 
"abort": true, 00:06:46.508 "seek_hole": false, 00:06:46.508 "seek_data": false, 00:06:46.508 "copy": true, 00:06:46.508 "nvme_iov_md": false 00:06:46.508 }, 00:06:46.508 "memory_domains": [ 00:06:46.508 { 00:06:46.508 "dma_device_id": "system", 00:06:46.508 "dma_device_type": 1 00:06:46.508 }, 00:06:46.508 { 00:06:46.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.508 "dma_device_type": 2 00:06:46.508 } 00:06:46.508 ], 00:06:46.508 "driver_specific": {} 00:06:46.508 } 00:06:46.508 ] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.508 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:46.508 "name": "Existed_Raid", 00:06:46.508 "uuid": "f5655743-92ce-4342-b67e-1b6a4f29529a", 00:06:46.508 "strip_size_kb": 64, 00:06:46.508 "state": "configuring", 00:06:46.508 "raid_level": "raid0", 00:06:46.508 "superblock": true, 00:06:46.508 "num_base_bdevs": 2, 00:06:46.508 "num_base_bdevs_discovered": 1, 00:06:46.508 "num_base_bdevs_operational": 2, 00:06:46.508 "base_bdevs_list": [ 00:06:46.508 { 00:06:46.508 "name": "BaseBdev1", 00:06:46.509 "uuid": "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325", 00:06:46.509 "is_configured": true, 00:06:46.509 "data_offset": 2048, 00:06:46.509 "data_size": 63488 00:06:46.509 }, 00:06:46.509 { 00:06:46.509 "name": "BaseBdev2", 00:06:46.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:46.509 "is_configured": false, 00:06:46.509 "data_offset": 0, 00:06:46.509 "data_size": 0 00:06:46.509 } 00:06:46.509 ] 00:06:46.509 }' 00:06:46.509 01:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:46.509 01:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.768 [2024-12-07 01:50:52.212113] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:46.768 [2024-12-07 01:50:52.212168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.768 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:46.768 [2024-12-07 01:50:52.224140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:46.768 [2024-12-07 01:50:52.226021] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:46.768 [2024-12-07 01:50:52.226055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.028 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.028 "name": "Existed_Raid", 00:06:47.028 "uuid": "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c", 00:06:47.028 "strip_size_kb": 64, 00:06:47.028 "state": "configuring", 00:06:47.028 "raid_level": "raid0", 00:06:47.028 "superblock": true, 00:06:47.028 "num_base_bdevs": 2, 00:06:47.028 "num_base_bdevs_discovered": 1, 00:06:47.028 "num_base_bdevs_operational": 2, 00:06:47.028 "base_bdevs_list": [ 00:06:47.028 { 00:06:47.028 "name": "BaseBdev1", 00:06:47.028 "uuid": "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325", 00:06:47.028 "is_configured": true, 00:06:47.029 "data_offset": 2048, 
00:06:47.029 "data_size": 63488 00:06:47.029 }, 00:06:47.029 { 00:06:47.029 "name": "BaseBdev2", 00:06:47.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:47.029 "is_configured": false, 00:06:47.029 "data_offset": 0, 00:06:47.029 "data_size": 0 00:06:47.029 } 00:06:47.029 ] 00:06:47.029 }' 00:06:47.029 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.029 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.288 [2024-12-07 01:50:52.644390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.288 [2024-12-07 01:50:52.644997] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:47.288 [2024-12-07 01:50:52.645090] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:47.288 BaseBdev2 00:06:47.288 [2024-12-07 01:50:52.646000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:47.288 [2024-12-07 01:50:52.646459] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.288 [2024-12-07 01:50:52.646550] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:47.288 [2024-12-07 01:50:52.647013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:47.288 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.289 [ 00:06:47.289 { 00:06:47.289 "name": "BaseBdev2", 00:06:47.289 "aliases": [ 00:06:47.289 "84b539af-0538-4cce-87ea-9349745215ee" 00:06:47.289 ], 00:06:47.289 "product_name": "Malloc disk", 00:06:47.289 "block_size": 512, 00:06:47.289 "num_blocks": 65536, 00:06:47.289 "uuid": "84b539af-0538-4cce-87ea-9349745215ee", 00:06:47.289 "assigned_rate_limits": { 00:06:47.289 "rw_ios_per_sec": 0, 00:06:47.289 "rw_mbytes_per_sec": 0, 00:06:47.289 "r_mbytes_per_sec": 0, 00:06:47.289 "w_mbytes_per_sec": 0 00:06:47.289 }, 00:06:47.289 "claimed": true, 00:06:47.289 "claim_type": 
"exclusive_write", 00:06:47.289 "zoned": false, 00:06:47.289 "supported_io_types": { 00:06:47.289 "read": true, 00:06:47.289 "write": true, 00:06:47.289 "unmap": true, 00:06:47.289 "flush": true, 00:06:47.289 "reset": true, 00:06:47.289 "nvme_admin": false, 00:06:47.289 "nvme_io": false, 00:06:47.289 "nvme_io_md": false, 00:06:47.289 "write_zeroes": true, 00:06:47.289 "zcopy": true, 00:06:47.289 "get_zone_info": false, 00:06:47.289 "zone_management": false, 00:06:47.289 "zone_append": false, 00:06:47.289 "compare": false, 00:06:47.289 "compare_and_write": false, 00:06:47.289 "abort": true, 00:06:47.289 "seek_hole": false, 00:06:47.289 "seek_data": false, 00:06:47.289 "copy": true, 00:06:47.289 "nvme_iov_md": false 00:06:47.289 }, 00:06:47.289 "memory_domains": [ 00:06:47.289 { 00:06:47.289 "dma_device_id": "system", 00:06:47.289 "dma_device_type": 1 00:06:47.289 }, 00:06:47.289 { 00:06:47.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.289 "dma_device_type": 2 00:06:47.289 } 00:06:47.289 ], 00:06:47.289 "driver_specific": {} 00:06:47.289 } 00:06:47.289 ] 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.289 "name": "Existed_Raid", 00:06:47.289 "uuid": "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c", 00:06:47.289 "strip_size_kb": 64, 00:06:47.289 "state": "online", 00:06:47.289 "raid_level": "raid0", 00:06:47.289 "superblock": true, 00:06:47.289 "num_base_bdevs": 2, 00:06:47.289 "num_base_bdevs_discovered": 2, 00:06:47.289 "num_base_bdevs_operational": 2, 00:06:47.289 "base_bdevs_list": [ 00:06:47.289 { 00:06:47.289 "name": "BaseBdev1", 00:06:47.289 "uuid": "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325", 00:06:47.289 "is_configured": true, 00:06:47.289 "data_offset": 2048, 00:06:47.289 "data_size": 63488 
00:06:47.289 }, 00:06:47.289 { 00:06:47.289 "name": "BaseBdev2", 00:06:47.289 "uuid": "84b539af-0538-4cce-87ea-9349745215ee", 00:06:47.289 "is_configured": true, 00:06:47.289 "data_offset": 2048, 00:06:47.289 "data_size": 63488 00:06:47.289 } 00:06:47.289 ] 00:06:47.289 }' 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.289 01:50:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.858 [2024-12-07 01:50:53.075956] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:47.858 "name": 
"Existed_Raid", 00:06:47.858 "aliases": [ 00:06:47.858 "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c" 00:06:47.858 ], 00:06:47.858 "product_name": "Raid Volume", 00:06:47.858 "block_size": 512, 00:06:47.858 "num_blocks": 126976, 00:06:47.858 "uuid": "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c", 00:06:47.858 "assigned_rate_limits": { 00:06:47.858 "rw_ios_per_sec": 0, 00:06:47.858 "rw_mbytes_per_sec": 0, 00:06:47.858 "r_mbytes_per_sec": 0, 00:06:47.858 "w_mbytes_per_sec": 0 00:06:47.858 }, 00:06:47.858 "claimed": false, 00:06:47.858 "zoned": false, 00:06:47.858 "supported_io_types": { 00:06:47.858 "read": true, 00:06:47.858 "write": true, 00:06:47.858 "unmap": true, 00:06:47.858 "flush": true, 00:06:47.858 "reset": true, 00:06:47.858 "nvme_admin": false, 00:06:47.858 "nvme_io": false, 00:06:47.858 "nvme_io_md": false, 00:06:47.858 "write_zeroes": true, 00:06:47.858 "zcopy": false, 00:06:47.858 "get_zone_info": false, 00:06:47.858 "zone_management": false, 00:06:47.858 "zone_append": false, 00:06:47.858 "compare": false, 00:06:47.858 "compare_and_write": false, 00:06:47.858 "abort": false, 00:06:47.858 "seek_hole": false, 00:06:47.858 "seek_data": false, 00:06:47.858 "copy": false, 00:06:47.858 "nvme_iov_md": false 00:06:47.858 }, 00:06:47.858 "memory_domains": [ 00:06:47.858 { 00:06:47.858 "dma_device_id": "system", 00:06:47.858 "dma_device_type": 1 00:06:47.858 }, 00:06:47.858 { 00:06:47.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.858 "dma_device_type": 2 00:06:47.858 }, 00:06:47.858 { 00:06:47.858 "dma_device_id": "system", 00:06:47.858 "dma_device_type": 1 00:06:47.858 }, 00:06:47.858 { 00:06:47.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.858 "dma_device_type": 2 00:06:47.858 } 00:06:47.858 ], 00:06:47.858 "driver_specific": { 00:06:47.858 "raid": { 00:06:47.858 "uuid": "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c", 00:06:47.858 "strip_size_kb": 64, 00:06:47.858 "state": "online", 00:06:47.858 "raid_level": "raid0", 00:06:47.858 "superblock": true, 00:06:47.858 
"num_base_bdevs": 2, 00:06:47.858 "num_base_bdevs_discovered": 2, 00:06:47.858 "num_base_bdevs_operational": 2, 00:06:47.858 "base_bdevs_list": [ 00:06:47.858 { 00:06:47.858 "name": "BaseBdev1", 00:06:47.858 "uuid": "3749b8e5-84f7-4ce9-9ebc-7b2c6a401325", 00:06:47.858 "is_configured": true, 00:06:47.858 "data_offset": 2048, 00:06:47.858 "data_size": 63488 00:06:47.858 }, 00:06:47.858 { 00:06:47.858 "name": "BaseBdev2", 00:06:47.858 "uuid": "84b539af-0538-4cce-87ea-9349745215ee", 00:06:47.858 "is_configured": true, 00:06:47.858 "data_offset": 2048, 00:06:47.858 "data_size": 63488 00:06:47.858 } 00:06:47.858 ] 00:06:47.858 } 00:06:47.858 } 00:06:47.858 }' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:47.858 BaseBdev2' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.858 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:47.858 [2024-12-07 01:50:53.307288] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:47.858 [2024-12-07 01:50:53.307360] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:47.858 [2024-12-07 01:50:53.307422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.118 01:50:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.118 "name": "Existed_Raid", 00:06:48.118 "uuid": "3fc6b04c-b791-43eb-81bc-1a68b87c4b7c", 00:06:48.118 "strip_size_kb": 64, 00:06:48.118 "state": "offline", 00:06:48.118 "raid_level": "raid0", 00:06:48.118 "superblock": true, 00:06:48.118 "num_base_bdevs": 2, 00:06:48.118 "num_base_bdevs_discovered": 1, 00:06:48.118 "num_base_bdevs_operational": 1, 00:06:48.118 "base_bdevs_list": [ 00:06:48.118 { 00:06:48.118 "name": null, 00:06:48.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:48.118 "is_configured": false, 00:06:48.118 "data_offset": 0, 00:06:48.118 "data_size": 63488 00:06:48.118 }, 00:06:48.118 { 00:06:48.118 "name": "BaseBdev2", 00:06:48.118 "uuid": "84b539af-0538-4cce-87ea-9349745215ee", 00:06:48.118 "is_configured": true, 00:06:48.118 "data_offset": 2048, 00:06:48.118 "data_size": 63488 00:06:48.118 } 00:06:48.118 ] 00:06:48.118 }' 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.118 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.378 01:50:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.378 [2024-12-07 01:50:53.793712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:48.378 [2024-12-07 01:50:53.793818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:48.378 01:50:53 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72097 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72097 ']' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72097 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72097 00:06:48.638 killing process with pid 72097 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72097' 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72097 00:06:48.638 [2024-12-07 01:50:53.900722] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.638 01:50:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72097 00:06:48.638 [2024-12-07 01:50:53.901707] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.897 01:50:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:06:48.897 ************************************ 00:06:48.897 END TEST raid_state_function_test_sb 00:06:48.897 ************************************ 00:06:48.897 00:06:48.897 real 0m3.684s 00:06:48.897 user 0m5.752s 00:06:48.897 sys 0m0.713s 00:06:48.897 01:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.897 01:50:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:48.897 01:50:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:48.897 01:50:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:48.897 01:50:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.897 01:50:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.897 ************************************ 00:06:48.897 START TEST raid_superblock_test 00:06:48.897 ************************************ 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:48.897 01:50:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72332 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72332 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72332 ']' 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.898 01:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.898 [2024-12-07 01:50:54.282307] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:48.898 [2024-12-07 01:50:54.282426] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72332 ] 00:06:49.155 [2024-12-07 01:50:54.426720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.155 [2024-12-07 01:50:54.471025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.155 [2024-12-07 01:50:54.512000] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.155 [2024-12-07 01:50:54.512134] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:49.722 01:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.722 malloc1 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.722 [2024-12-07 01:50:55.145543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:49.722 [2024-12-07 01:50:55.145698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.722 [2024-12-07 01:50:55.145737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:49.722 [2024-12-07 01:50:55.145784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.722 [2024-12-07 01:50:55.147823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.722 [2024-12-07 01:50:55.147893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:49.722 pt1 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:49.722 01:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.722 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.722 malloc2 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.980 [2024-12-07 01:50:55.188630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:49.980 [2024-12-07 01:50:55.188702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:49.980 [2024-12-07 01:50:55.188720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:49.980 
[2024-12-07 01:50:55.188734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:49.980 [2024-12-07 01:50:55.191150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:49.980 [2024-12-07 01:50:55.191187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:49.980 pt2 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.980 [2024-12-07 01:50:55.200636] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:49.980 [2024-12-07 01:50:55.202502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:49.980 [2024-12-07 01:50:55.202630] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:49.980 [2024-12-07 01:50:55.202645] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:49.980 [2024-12-07 01:50:55.202938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:49.980 [2024-12-07 01:50:55.203073] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:49.980 [2024-12-07 01:50:55.203084] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:49.980 [2024-12-07 01:50:55.203212] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:49.980 "name": "raid_bdev1", 00:06:49.980 "uuid": 
"000291d4-8938-493d-9348-629d370e914e", 00:06:49.980 "strip_size_kb": 64, 00:06:49.980 "state": "online", 00:06:49.980 "raid_level": "raid0", 00:06:49.980 "superblock": true, 00:06:49.980 "num_base_bdevs": 2, 00:06:49.980 "num_base_bdevs_discovered": 2, 00:06:49.980 "num_base_bdevs_operational": 2, 00:06:49.980 "base_bdevs_list": [ 00:06:49.980 { 00:06:49.980 "name": "pt1", 00:06:49.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:49.980 "is_configured": true, 00:06:49.980 "data_offset": 2048, 00:06:49.980 "data_size": 63488 00:06:49.980 }, 00:06:49.980 { 00:06:49.980 "name": "pt2", 00:06:49.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:49.980 "is_configured": true, 00:06:49.980 "data_offset": 2048, 00:06:49.980 "data_size": 63488 00:06:49.980 } 00:06:49.980 ] 00:06:49.980 }' 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:49.980 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.239 01:50:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.239 [2024-12-07 01:50:55.644189] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:50.239 "name": "raid_bdev1", 00:06:50.239 "aliases": [ 00:06:50.239 "000291d4-8938-493d-9348-629d370e914e" 00:06:50.239 ], 00:06:50.239 "product_name": "Raid Volume", 00:06:50.239 "block_size": 512, 00:06:50.239 "num_blocks": 126976, 00:06:50.239 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:50.239 "assigned_rate_limits": { 00:06:50.239 "rw_ios_per_sec": 0, 00:06:50.239 "rw_mbytes_per_sec": 0, 00:06:50.239 "r_mbytes_per_sec": 0, 00:06:50.239 "w_mbytes_per_sec": 0 00:06:50.239 }, 00:06:50.239 "claimed": false, 00:06:50.239 "zoned": false, 00:06:50.239 "supported_io_types": { 00:06:50.239 "read": true, 00:06:50.239 "write": true, 00:06:50.239 "unmap": true, 00:06:50.239 "flush": true, 00:06:50.239 "reset": true, 00:06:50.239 "nvme_admin": false, 00:06:50.239 "nvme_io": false, 00:06:50.239 "nvme_io_md": false, 00:06:50.239 "write_zeroes": true, 00:06:50.239 "zcopy": false, 00:06:50.239 "get_zone_info": false, 00:06:50.239 "zone_management": false, 00:06:50.239 "zone_append": false, 00:06:50.239 "compare": false, 00:06:50.239 "compare_and_write": false, 00:06:50.239 "abort": false, 00:06:50.239 "seek_hole": false, 00:06:50.239 "seek_data": false, 00:06:50.239 "copy": false, 00:06:50.239 "nvme_iov_md": false 00:06:50.239 }, 00:06:50.239 "memory_domains": [ 00:06:50.239 { 00:06:50.239 "dma_device_id": "system", 00:06:50.239 "dma_device_type": 1 00:06:50.239 }, 00:06:50.239 { 00:06:50.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.239 "dma_device_type": 2 00:06:50.239 }, 00:06:50.239 { 00:06:50.239 "dma_device_id": "system", 00:06:50.239 "dma_device_type": 
1 00:06:50.239 }, 00:06:50.239 { 00:06:50.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:50.239 "dma_device_type": 2 00:06:50.239 } 00:06:50.239 ], 00:06:50.239 "driver_specific": { 00:06:50.239 "raid": { 00:06:50.239 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:50.239 "strip_size_kb": 64, 00:06:50.239 "state": "online", 00:06:50.239 "raid_level": "raid0", 00:06:50.239 "superblock": true, 00:06:50.239 "num_base_bdevs": 2, 00:06:50.239 "num_base_bdevs_discovered": 2, 00:06:50.239 "num_base_bdevs_operational": 2, 00:06:50.239 "base_bdevs_list": [ 00:06:50.239 { 00:06:50.239 "name": "pt1", 00:06:50.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:50.239 "is_configured": true, 00:06:50.239 "data_offset": 2048, 00:06:50.239 "data_size": 63488 00:06:50.239 }, 00:06:50.239 { 00:06:50.239 "name": "pt2", 00:06:50.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:50.239 "is_configured": true, 00:06:50.239 "data_offset": 2048, 00:06:50.239 "data_size": 63488 00:06:50.239 } 00:06:50.239 ] 00:06:50.239 } 00:06:50.239 } 00:06:50.239 }' 00:06:50.239 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:50.500 pt2' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 [2024-12-07 01:50:55.831758] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:50.500 01:50:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=000291d4-8938-493d-9348-629d370e914e 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 000291d4-8938-493d-9348-629d370e914e ']' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 [2024-12-07 01:50:55.875427] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:50.500 [2024-12-07 01:50:55.875496] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:50.500 [2024-12-07 01:50:55.875602] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:50.500 [2024-12-07 01:50:55.875693] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:50.500 [2024-12-07 01:50:55.875743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.500 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.501 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:50.501 01:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:50.501 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.501 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.761 01:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.761 [2024-12-07 01:50:56.011224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:50.761 [2024-12-07 01:50:56.013149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:50.761 [2024-12-07 01:50:56.013250] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:50.761 [2024-12-07 01:50:56.013326] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:50.761 [2024-12-07 01:50:56.013383] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:50.761 [2024-12-07 01:50:56.013415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:50.761 request: 00:06:50.761 { 00:06:50.761 "name": "raid_bdev1", 00:06:50.761 "raid_level": "raid0", 00:06:50.761 "base_bdevs": [ 00:06:50.761 "malloc1", 00:06:50.761 "malloc2" 00:06:50.761 ], 00:06:50.761 "strip_size_kb": 64, 00:06:50.761 "superblock": false, 00:06:50.761 "method": "bdev_raid_create", 00:06:50.761 "req_id": 1 00:06:50.761 } 00:06:50.761 Got JSON-RPC error response 00:06:50.761 response: 00:06:50.761 { 00:06:50.761 "code": -17, 00:06:50.761 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:50.761 } 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.761 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.761 [2024-12-07 01:50:56.063099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:50.761 [2024-12-07 01:50:56.063191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:50.761 [2024-12-07 01:50:56.063252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:50.761 [2024-12-07 01:50:56.063285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:50.761 [2024-12-07 01:50:56.065512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:50.761 [2024-12-07 01:50:56.065576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:50.761 [2024-12-07 01:50:56.065689] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:50.761 [2024-12-07 01:50:56.065739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:50.761 pt1 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.762 "name": "raid_bdev1", 00:06:50.762 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:50.762 "strip_size_kb": 64, 00:06:50.762 "state": "configuring", 00:06:50.762 "raid_level": "raid0", 00:06:50.762 "superblock": true, 00:06:50.762 "num_base_bdevs": 2, 00:06:50.762 "num_base_bdevs_discovered": 1, 00:06:50.762 "num_base_bdevs_operational": 2, 00:06:50.762 "base_bdevs_list": [ 00:06:50.762 { 00:06:50.762 "name": "pt1", 00:06:50.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:50.762 "is_configured": true, 00:06:50.762 "data_offset": 2048, 00:06:50.762 "data_size": 63488 00:06:50.762 }, 00:06:50.762 { 00:06:50.762 "name": null, 00:06:50.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:50.762 "is_configured": false, 00:06:50.762 "data_offset": 2048, 00:06:50.762 "data_size": 63488 00:06:50.762 } 00:06:50.762 ] 00:06:50.762 }' 00:06:50.762 01:50:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.762 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 [2024-12-07 01:50:56.494394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:51.330 [2024-12-07 01:50:56.494505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:51.330 [2024-12-07 01:50:56.494559] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:51.330 [2024-12-07 01:50:56.494587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:51.330 [2024-12-07 01:50:56.495036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:51.330 [2024-12-07 01:50:56.495097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:51.330 [2024-12-07 01:50:56.495202] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:51.330 [2024-12-07 01:50:56.495251] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:51.330 [2024-12-07 01:50:56.495364] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:51.330 [2024-12-07 01:50:56.495400] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:51.330 [2024-12-07 01:50:56.495682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:51.330 [2024-12-07 01:50:56.495847] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:51.330 [2024-12-07 01:50:56.495894] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:51.330 [2024-12-07 01:50:56.496045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.330 pt2 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.330 "name": "raid_bdev1", 00:06:51.330 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:51.330 "strip_size_kb": 64, 00:06:51.330 "state": "online", 00:06:51.330 "raid_level": "raid0", 00:06:51.330 "superblock": true, 00:06:51.330 "num_base_bdevs": 2, 00:06:51.330 "num_base_bdevs_discovered": 2, 00:06:51.330 "num_base_bdevs_operational": 2, 00:06:51.330 "base_bdevs_list": [ 00:06:51.330 { 00:06:51.330 "name": "pt1", 00:06:51.330 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:51.330 "is_configured": true, 00:06:51.330 "data_offset": 2048, 00:06:51.330 "data_size": 63488 00:06:51.330 }, 00:06:51.330 { 00:06:51.330 "name": "pt2", 00:06:51.330 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:51.330 "is_configured": true, 00:06:51.330 "data_offset": 2048, 00:06:51.330 "data_size": 63488 00:06:51.330 } 00:06:51.330 ] 00:06:51.330 }' 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.330 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:51.589 
01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:51.589 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:51.590 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.590 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.590 [2024-12-07 01:50:56.921878] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.590 01:50:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.590 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:51.590 "name": "raid_bdev1", 00:06:51.590 "aliases": [ 00:06:51.590 "000291d4-8938-493d-9348-629d370e914e" 00:06:51.590 ], 00:06:51.590 "product_name": "Raid Volume", 00:06:51.590 "block_size": 512, 00:06:51.590 "num_blocks": 126976, 00:06:51.590 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:51.590 "assigned_rate_limits": { 00:06:51.590 "rw_ios_per_sec": 0, 00:06:51.590 "rw_mbytes_per_sec": 0, 00:06:51.590 "r_mbytes_per_sec": 0, 00:06:51.590 "w_mbytes_per_sec": 0 00:06:51.590 }, 00:06:51.590 "claimed": false, 00:06:51.590 "zoned": false, 00:06:51.590 "supported_io_types": { 00:06:51.590 "read": true, 00:06:51.590 "write": true, 00:06:51.590 "unmap": true, 00:06:51.590 "flush": true, 00:06:51.590 "reset": true, 00:06:51.590 "nvme_admin": false, 00:06:51.590 "nvme_io": false, 00:06:51.590 "nvme_io_md": false, 00:06:51.590 
"write_zeroes": true, 00:06:51.590 "zcopy": false, 00:06:51.590 "get_zone_info": false, 00:06:51.590 "zone_management": false, 00:06:51.590 "zone_append": false, 00:06:51.590 "compare": false, 00:06:51.590 "compare_and_write": false, 00:06:51.590 "abort": false, 00:06:51.590 "seek_hole": false, 00:06:51.590 "seek_data": false, 00:06:51.590 "copy": false, 00:06:51.590 "nvme_iov_md": false 00:06:51.590 }, 00:06:51.590 "memory_domains": [ 00:06:51.590 { 00:06:51.590 "dma_device_id": "system", 00:06:51.590 "dma_device_type": 1 00:06:51.590 }, 00:06:51.590 { 00:06:51.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.590 "dma_device_type": 2 00:06:51.590 }, 00:06:51.590 { 00:06:51.590 "dma_device_id": "system", 00:06:51.590 "dma_device_type": 1 00:06:51.590 }, 00:06:51.590 { 00:06:51.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.590 "dma_device_type": 2 00:06:51.590 } 00:06:51.590 ], 00:06:51.590 "driver_specific": { 00:06:51.590 "raid": { 00:06:51.590 "uuid": "000291d4-8938-493d-9348-629d370e914e", 00:06:51.590 "strip_size_kb": 64, 00:06:51.590 "state": "online", 00:06:51.590 "raid_level": "raid0", 00:06:51.590 "superblock": true, 00:06:51.590 "num_base_bdevs": 2, 00:06:51.590 "num_base_bdevs_discovered": 2, 00:06:51.590 "num_base_bdevs_operational": 2, 00:06:51.590 "base_bdevs_list": [ 00:06:51.590 { 00:06:51.590 "name": "pt1", 00:06:51.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:51.590 "is_configured": true, 00:06:51.590 "data_offset": 2048, 00:06:51.590 "data_size": 63488 00:06:51.590 }, 00:06:51.590 { 00:06:51.590 "name": "pt2", 00:06:51.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:51.590 "is_configured": true, 00:06:51.590 "data_offset": 2048, 00:06:51.590 "data_size": 63488 00:06:51.590 } 00:06:51.590 ] 00:06:51.590 } 00:06:51.590 } 00:06:51.590 }' 00:06:51.590 01:50:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:06:51.590 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:51.590 pt2' 00:06:51.590 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.851 01:50:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.851 [2024-12-07 01:50:57.169422] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 000291d4-8938-493d-9348-629d370e914e '!=' 000291d4-8938-493d-9348-629d370e914e ']' 00:06:51.851 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72332 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72332 ']' 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72332 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72332 00:06:51.852 killing process with pid 72332 
00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72332' 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72332 00:06:51.852 [2024-12-07 01:50:57.257217] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.852 [2024-12-07 01:50:57.257292] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:51.852 [2024-12-07 01:50:57.257341] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:51.852 [2024-12-07 01:50:57.257350] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:51.852 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72332 00:06:51.852 [2024-12-07 01:50:57.280604] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:52.112 01:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:52.112 ************************************ 00:06:52.112 END TEST raid_superblock_test 00:06:52.112 00:06:52.112 real 0m3.323s 00:06:52.112 user 0m5.146s 00:06:52.112 sys 0m0.696s 00:06:52.112 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.112 01:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.112 ************************************ 00:06:52.374 01:50:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:52.374 01:50:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:52.374 01:50:57 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.374 01:50:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:52.374 ************************************ 00:06:52.374 START TEST raid_read_error_test 00:06:52.374 ************************************ 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:52.374 01:50:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UAyCgnwqvS 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72533 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72533 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 72533 ']' 00:06:52.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.374 01:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.374 [2024-12-07 01:50:57.683545] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:52.374 [2024-12-07 01:50:57.684168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72533 ] 00:06:52.374 [2024-12-07 01:50:57.827076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.634 [2024-12-07 01:50:57.872400] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.634 [2024-12-07 01:50:57.913647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:52.634 [2024-12-07 01:50:57.913689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 BaseBdev1_malloc 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 true 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 [2024-12-07 01:50:58.551169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:53.203 [2024-12-07 01:50:58.551248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.203 [2024-12-07 01:50:58.551276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:53.203 [2024-12-07 01:50:58.551287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.203 [2024-12-07 01:50:58.553453] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.203 [2024-12-07 01:50:58.553488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:53.203 BaseBdev1 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:06:53.203 BaseBdev2_malloc 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 true 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.203 [2024-12-07 01:50:58.609510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:53.203 [2024-12-07 01:50:58.609657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:53.203 [2024-12-07 01:50:58.609737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:53.203 [2024-12-07 01:50:58.609792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:53.203 [2024-12-07 01:50:58.612341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:53.203 [2024-12-07 01:50:58.612420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:53.203 BaseBdev2 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:53.203 01:50:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.203 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.204 [2024-12-07 01:50:58.621524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:53.204 [2024-12-07 01:50:58.623400] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:53.204 [2024-12-07 01:50:58.623629] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:53.204 [2024-12-07 01:50:58.623689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:53.204 [2024-12-07 01:50:58.623950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:53.204 [2024-12-07 01:50:58.624127] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:53.204 [2024-12-07 01:50:58.624174] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:53.204 [2024-12-07 01:50:58.624345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.204 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.464 01:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.464 "name": "raid_bdev1", 00:06:53.464 "uuid": "c8fe2ba8-6cae-442a-b867-6f5be4c39a04", 00:06:53.464 "strip_size_kb": 64, 00:06:53.464 "state": "online", 00:06:53.464 "raid_level": "raid0", 00:06:53.464 "superblock": true, 00:06:53.464 "num_base_bdevs": 2, 00:06:53.464 "num_base_bdevs_discovered": 2, 00:06:53.464 "num_base_bdevs_operational": 2, 00:06:53.464 "base_bdevs_list": [ 00:06:53.464 { 00:06:53.464 "name": "BaseBdev1", 00:06:53.464 "uuid": "c7911695-6b05-5223-8368-89dc59fe8efb", 00:06:53.464 "is_configured": true, 00:06:53.464 "data_offset": 2048, 00:06:53.464 "data_size": 63488 00:06:53.464 }, 00:06:53.464 { 00:06:53.464 "name": "BaseBdev2", 00:06:53.464 "uuid": "6559faf9-57a6-57de-bf6c-2533a86e0c34", 00:06:53.464 "is_configured": true, 00:06:53.464 "data_offset": 2048, 00:06:53.464 "data_size": 63488 00:06:53.464 } 00:06:53.464 ] 00:06:53.464 }' 00:06:53.464 01:50:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.464 01:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.724 01:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:53.724 01:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:53.724 [2024-12-07 01:50:59.141009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.664 "name": "raid_bdev1", 00:06:54.664 "uuid": "c8fe2ba8-6cae-442a-b867-6f5be4c39a04", 00:06:54.664 "strip_size_kb": 64, 00:06:54.664 "state": "online", 00:06:54.664 "raid_level": "raid0", 00:06:54.664 "superblock": true, 00:06:54.664 "num_base_bdevs": 2, 00:06:54.664 "num_base_bdevs_discovered": 2, 00:06:54.664 "num_base_bdevs_operational": 2, 00:06:54.664 "base_bdevs_list": [ 00:06:54.664 { 00:06:54.664 "name": "BaseBdev1", 00:06:54.664 "uuid": "c7911695-6b05-5223-8368-89dc59fe8efb", 00:06:54.664 "is_configured": true, 00:06:54.664 "data_offset": 2048, 00:06:54.664 "data_size": 63488 00:06:54.664 }, 00:06:54.664 { 00:06:54.664 "name": "BaseBdev2", 00:06:54.664 "uuid": "6559faf9-57a6-57de-bf6c-2533a86e0c34", 00:06:54.664 "is_configured": true, 00:06:54.664 "data_offset": 2048, 00:06:54.664 "data_size": 63488 00:06:54.664 } 00:06:54.664 ] 00:06:54.664 }' 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.664 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.230 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.231 [2024-12-07 01:51:00.500566] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:55.231 [2024-12-07 01:51:00.500644] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:55.231 [2024-12-07 01:51:00.503219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:55.231 [2024-12-07 01:51:00.503302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.231 [2024-12-07 01:51:00.503356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:55.231 [2024-12-07 01:51:00.503409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:55.231 { 00:06:55.231 "results": [ 00:06:55.231 { 00:06:55.231 "job": "raid_bdev1", 00:06:55.231 "core_mask": "0x1", 00:06:55.231 "workload": "randrw", 00:06:55.231 "percentage": 50, 00:06:55.231 "status": "finished", 00:06:55.231 "queue_depth": 1, 00:06:55.231 "io_size": 131072, 00:06:55.231 "runtime": 1.360447, 00:06:55.231 "iops": 17563.344988816178, 00:06:55.231 "mibps": 2195.418123602022, 00:06:55.231 "io_failed": 1, 00:06:55.231 "io_timeout": 0, 00:06:55.231 "avg_latency_us": 78.73052581755515, 00:06:55.231 "min_latency_us": 24.593886462882097, 00:06:55.231 "max_latency_us": 1402.2986899563318 00:06:55.231 } 00:06:55.231 ], 00:06:55.231 "core_count": 1 00:06:55.231 } 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72533 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 72533 ']' 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 72533 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72533 00:06:55.231 killing process with pid 72533 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72533' 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 72533 00:06:55.231 [2024-12-07 01:51:00.549287] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:55.231 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 72533 00:06:55.231 [2024-12-07 01:51:00.565113] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UAyCgnwqvS 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:06:55.490 00:06:55.490 real 0m3.220s 00:06:55.490 user 0m4.112s 00:06:55.490 sys 0m0.472s 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.490 01:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.490 ************************************ 00:06:55.490 END TEST raid_read_error_test 00:06:55.490 ************************************ 00:06:55.490 01:51:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:55.490 01:51:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:55.490 01:51:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.490 01:51:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.490 ************************************ 00:06:55.490 START TEST raid_write_error_test 00:06:55.490 ************************************ 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:55.490 01:51:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YqEoSGo9lB 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72662 00:06:55.490 01:51:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72662 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 72662 ']' 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.490 01:51:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.748 [2024-12-07 01:51:00.982468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:55.748 [2024-12-07 01:51:00.982669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72662 ] 00:06:55.748 [2024-12-07 01:51:01.125988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.748 [2024-12-07 01:51:01.170453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.007 [2024-12-07 01:51:01.211529] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.007 [2024-12-07 01:51:01.211647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.576 BaseBdev1_malloc 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.576 true 00:06:56.576 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 [2024-12-07 01:51:01.817202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:56.577 [2024-12-07 01:51:01.817310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.577 [2024-12-07 01:51:01.817336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:56.577 [2024-12-07 01:51:01.817345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.577 [2024-12-07 01:51:01.819482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.577 [2024-12-07 01:51:01.819520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:56.577 BaseBdev1 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 BaseBdev2_malloc 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:56.577 01:51:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 true 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 [2024-12-07 01:51:01.873815] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:56.577 [2024-12-07 01:51:01.873960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:56.577 [2024-12-07 01:51:01.874001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:56.577 [2024-12-07 01:51:01.874016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:56.577 [2024-12-07 01:51:01.877221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:56.577 [2024-12-07 01:51:01.877271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:56.577 BaseBdev2 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 [2024-12-07 01:51:01.885876] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:56.577 [2024-12-07 01:51:01.887948] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:56.577 [2024-12-07 01:51:01.888193] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:56.577 [2024-12-07 01:51:01.888252] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:56.577 [2024-12-07 01:51:01.888560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:56.577 [2024-12-07 01:51:01.888747] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:56.577 [2024-12-07 01:51:01.888805] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:56.577 [2024-12-07 01:51:01.888974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.577 "name": "raid_bdev1", 00:06:56.577 "uuid": "07b5f313-1f8a-41dc-b6fb-5c9be8b9d7ac", 00:06:56.577 "strip_size_kb": 64, 00:06:56.577 "state": "online", 00:06:56.577 "raid_level": "raid0", 00:06:56.577 "superblock": true, 00:06:56.577 "num_base_bdevs": 2, 00:06:56.577 "num_base_bdevs_discovered": 2, 00:06:56.577 "num_base_bdevs_operational": 2, 00:06:56.577 "base_bdevs_list": [ 00:06:56.577 { 00:06:56.577 "name": "BaseBdev1", 00:06:56.577 "uuid": "4d2abc9c-266d-56d1-8c59-60aa72d54efe", 00:06:56.577 "is_configured": true, 00:06:56.577 "data_offset": 2048, 00:06:56.577 "data_size": 63488 00:06:56.577 }, 00:06:56.577 { 00:06:56.577 "name": "BaseBdev2", 00:06:56.577 "uuid": "376fcebd-e43c-5ca4-8b3b-150d3935a785", 00:06:56.577 "is_configured": true, 00:06:56.577 "data_offset": 2048, 00:06:56.577 "data_size": 63488 00:06:56.577 } 00:06:56.577 ] 00:06:56.577 }' 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.577 01:51:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.864 01:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:56.864 01:51:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:57.122 [2024-12-07 01:51:02.381367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.069 01:51:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.069 "name": "raid_bdev1", 00:06:58.069 "uuid": "07b5f313-1f8a-41dc-b6fb-5c9be8b9d7ac", 00:06:58.069 "strip_size_kb": 64, 00:06:58.069 "state": "online", 00:06:58.069 "raid_level": "raid0", 00:06:58.069 "superblock": true, 00:06:58.069 "num_base_bdevs": 2, 00:06:58.069 "num_base_bdevs_discovered": 2, 00:06:58.069 "num_base_bdevs_operational": 2, 00:06:58.069 "base_bdevs_list": [ 00:06:58.069 { 00:06:58.069 "name": "BaseBdev1", 00:06:58.069 "uuid": "4d2abc9c-266d-56d1-8c59-60aa72d54efe", 00:06:58.069 "is_configured": true, 00:06:58.069 "data_offset": 2048, 00:06:58.069 "data_size": 63488 00:06:58.069 }, 00:06:58.069 { 00:06:58.069 "name": "BaseBdev2", 00:06:58.069 "uuid": "376fcebd-e43c-5ca4-8b3b-150d3935a785", 00:06:58.069 "is_configured": true, 00:06:58.069 "data_offset": 2048, 00:06:58.069 "data_size": 63488 00:06:58.069 } 00:06:58.069 ] 00:06:58.069 }' 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.069 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.328 [2024-12-07 01:51:03.776855] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:58.328 [2024-12-07 01:51:03.776936] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.328 [2024-12-07 01:51:03.779445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.328 [2024-12-07 01:51:03.779548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.328 [2024-12-07 01:51:03.779605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.328 [2024-12-07 01:51:03.779647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:58.328 { 00:06:58.328 "results": [ 00:06:58.328 { 00:06:58.328 "job": "raid_bdev1", 00:06:58.328 "core_mask": "0x1", 00:06:58.328 "workload": "randrw", 00:06:58.328 "percentage": 50, 00:06:58.328 "status": "finished", 00:06:58.328 "queue_depth": 1, 00:06:58.328 "io_size": 131072, 00:06:58.328 "runtime": 1.396451, 00:06:58.328 "iops": 17633.271772514752, 00:06:58.328 "mibps": 2204.158971564344, 00:06:58.328 "io_failed": 1, 00:06:58.328 "io_timeout": 0, 00:06:58.328 "avg_latency_us": 78.32019045507946, 00:06:58.328 "min_latency_us": 24.593886462882097, 00:06:58.328 "max_latency_us": 1423.7624454148472 00:06:58.328 } 00:06:58.328 ], 00:06:58.328 "core_count": 1 00:06:58.328 } 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72662 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test --
common/autotest_common.sh@950 -- # '[' -z 72662 ']' 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 72662 00:06:58.328 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72662 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72662' 00:06:58.588 killing process with pid 72662 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 72662 00:06:58.588 [2024-12-07 01:51:03.816361] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.588 01:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 72662 00:06:58.588 [2024-12-07 01:51:03.831417] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YqEoSGo9lB 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:06:58.848 00:06:58.848 real 0m3.190s 00:06:58.848 user 0m4.049s 00:06:58.848 sys 0m0.475s 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.848 01:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 ************************************ 00:06:58.848 END TEST raid_write_error_test 00:06:58.848 ************************************ 00:06:58.848 01:51:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:58.848 01:51:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:58.848 01:51:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:58.848 01:51:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.848 01:51:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:58.848 ************************************ 00:06:58.848 START TEST raid_state_function_test 00:06:58.848 ************************************ 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.848 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:58.849 Process raid pid: 72789 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72789 
00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72789' 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72789 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 72789 ']' 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.849 01:51:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.849 [2024-12-07 01:51:04.234053] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:58.849 [2024-12-07 01:51:04.234266] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.108 [2024-12-07 01:51:04.361922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.108 [2024-12-07 01:51:04.405736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.108 [2024-12-07 01:51:04.447072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.108 [2024-12-07 01:51:04.447104] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.677 [2024-12-07 01:51:05.083718] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:59.677 [2024-12-07 01:51:05.083772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:59.677 [2024-12-07 01:51:05.083792] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.677 [2024-12-07 01:51:05.083803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.677 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.678 01:51:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.678 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.936 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.936 "name": "Existed_Raid", 00:06:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.936 "strip_size_kb": 64, 00:06:59.936 "state": "configuring", 00:06:59.936 
"raid_level": "concat", 00:06:59.936 "superblock": false, 00:06:59.936 "num_base_bdevs": 2, 00:06:59.936 "num_base_bdevs_discovered": 0, 00:06:59.936 "num_base_bdevs_operational": 2, 00:06:59.936 "base_bdevs_list": [ 00:06:59.936 { 00:06:59.936 "name": "BaseBdev1", 00:06:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.936 "is_configured": false, 00:06:59.936 "data_offset": 0, 00:06:59.936 "data_size": 0 00:06:59.936 }, 00:06:59.936 { 00:06:59.936 "name": "BaseBdev2", 00:06:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.936 "is_configured": false, 00:06:59.936 "data_offset": 0, 00:06:59.936 "data_size": 0 00:06:59.936 } 00:06:59.936 ] 00:06:59.936 }' 00:06:59.936 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.936 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 [2024-12-07 01:51:05.530812] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.196 [2024-12-07 01:51:05.530907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:00.196 [2024-12-07 01:51:05.538808] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:00.196 [2024-12-07 01:51:05.538889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:00.196 [2024-12-07 01:51:05.538927] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.196 [2024-12-07 01:51:05.538949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 [2024-12-07 01:51:05.555506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.196 BaseBdev1 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 [ 00:07:00.196 { 00:07:00.196 "name": "BaseBdev1", 00:07:00.196 "aliases": [ 00:07:00.196 "c73163e5-9536-43d6-a164-6830d5797ccb" 00:07:00.196 ], 00:07:00.196 "product_name": "Malloc disk", 00:07:00.196 "block_size": 512, 00:07:00.196 "num_blocks": 65536, 00:07:00.196 "uuid": "c73163e5-9536-43d6-a164-6830d5797ccb", 00:07:00.196 "assigned_rate_limits": { 00:07:00.196 "rw_ios_per_sec": 0, 00:07:00.196 "rw_mbytes_per_sec": 0, 00:07:00.196 "r_mbytes_per_sec": 0, 00:07:00.196 "w_mbytes_per_sec": 0 00:07:00.196 }, 00:07:00.196 "claimed": true, 00:07:00.196 "claim_type": "exclusive_write", 00:07:00.196 "zoned": false, 00:07:00.196 "supported_io_types": { 00:07:00.196 "read": true, 00:07:00.196 "write": true, 00:07:00.196 "unmap": true, 00:07:00.196 "flush": true, 00:07:00.196 "reset": true, 00:07:00.196 "nvme_admin": false, 00:07:00.196 "nvme_io": false, 00:07:00.196 "nvme_io_md": false, 00:07:00.196 "write_zeroes": true, 00:07:00.196 "zcopy": true, 00:07:00.196 "get_zone_info": false, 00:07:00.196 "zone_management": false, 00:07:00.196 "zone_append": false, 00:07:00.196 "compare": false, 00:07:00.196 "compare_and_write": false, 00:07:00.196 "abort": true, 00:07:00.196 "seek_hole": false, 00:07:00.196 "seek_data": false, 00:07:00.196 "copy": true, 00:07:00.196 "nvme_iov_md": 
false 00:07:00.196 }, 00:07:00.196 "memory_domains": [ 00:07:00.196 { 00:07:00.196 "dma_device_id": "system", 00:07:00.196 "dma_device_type": 1 00:07:00.196 }, 00:07:00.196 { 00:07:00.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.196 "dma_device_type": 2 00:07:00.196 } 00:07:00.196 ], 00:07:00.196 "driver_specific": {} 00:07:00.196 } 00:07:00.196 ] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.196 
01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.196 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.196 "name": "Existed_Raid", 00:07:00.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.196 "strip_size_kb": 64, 00:07:00.196 "state": "configuring", 00:07:00.196 "raid_level": "concat", 00:07:00.196 "superblock": false, 00:07:00.196 "num_base_bdevs": 2, 00:07:00.196 "num_base_bdevs_discovered": 1, 00:07:00.196 "num_base_bdevs_operational": 2, 00:07:00.196 "base_bdevs_list": [ 00:07:00.196 { 00:07:00.196 "name": "BaseBdev1", 00:07:00.196 "uuid": "c73163e5-9536-43d6-a164-6830d5797ccb", 00:07:00.196 "is_configured": true, 00:07:00.196 "data_offset": 0, 00:07:00.196 "data_size": 65536 00:07:00.196 }, 00:07:00.196 { 00:07:00.196 "name": "BaseBdev2", 00:07:00.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.196 "is_configured": false, 00:07:00.196 "data_offset": 0, 00:07:00.196 "data_size": 0 00:07:00.196 } 00:07:00.196 ] 00:07:00.197 }' 00:07:00.197 01:51:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.197 01:51:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.764 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:00.764 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.764 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.764 [2024-12-07 01:51:06.034755] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:00.764 [2024-12-07 01:51:06.034902] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:00.764 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.765 [2024-12-07 01:51:06.046775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:00.765 [2024-12-07 01:51:06.048659] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:00.765 [2024-12-07 01:51:06.048716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.765 "name": "Existed_Raid", 00:07:00.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.765 "strip_size_kb": 64, 00:07:00.765 "state": "configuring", 00:07:00.765 "raid_level": "concat", 00:07:00.765 "superblock": false, 00:07:00.765 "num_base_bdevs": 2, 00:07:00.765 "num_base_bdevs_discovered": 1, 00:07:00.765 "num_base_bdevs_operational": 2, 00:07:00.765 "base_bdevs_list": [ 00:07:00.765 { 00:07:00.765 "name": "BaseBdev1", 00:07:00.765 "uuid": "c73163e5-9536-43d6-a164-6830d5797ccb", 00:07:00.765 "is_configured": true, 00:07:00.765 "data_offset": 0, 00:07:00.765 "data_size": 65536 00:07:00.765 }, 00:07:00.765 { 00:07:00.765 "name": "BaseBdev2", 00:07:00.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.765 "is_configured": false, 00:07:00.765 "data_offset": 0, 00:07:00.765 "data_size": 0 00:07:00.765 } 
00:07:00.765 ] 00:07:00.765 }' 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.765 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.025 [2024-12-07 01:51:06.465109] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:01.025 [2024-12-07 01:51:06.465392] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:01.025 [2024-12-07 01:51:06.465534] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:01.025 [2024-12-07 01:51:06.466745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:01.025 [2024-12-07 01:51:06.467395] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:01.025 [2024-12-07 01:51:06.467564] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:01.025 [2024-12-07 01:51:06.468324] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:01.025 BaseBdev2 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:01.025 01:51:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.025 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.292 [ 00:07:01.292 { 00:07:01.292 "name": "BaseBdev2", 00:07:01.292 "aliases": [ 00:07:01.292 "47988009-fe3a-4fe0-a903-f555f5e2728f" 00:07:01.292 ], 00:07:01.292 "product_name": "Malloc disk", 00:07:01.292 "block_size": 512, 00:07:01.292 "num_blocks": 65536, 00:07:01.292 "uuid": "47988009-fe3a-4fe0-a903-f555f5e2728f", 00:07:01.292 "assigned_rate_limits": { 00:07:01.292 "rw_ios_per_sec": 0, 00:07:01.292 "rw_mbytes_per_sec": 0, 00:07:01.292 "r_mbytes_per_sec": 0, 00:07:01.292 "w_mbytes_per_sec": 0 00:07:01.292 }, 00:07:01.292 "claimed": true, 00:07:01.292 "claim_type": "exclusive_write", 00:07:01.292 "zoned": false, 00:07:01.292 "supported_io_types": { 00:07:01.292 "read": true, 00:07:01.292 "write": true, 00:07:01.292 "unmap": true, 00:07:01.292 "flush": true, 00:07:01.292 "reset": true, 00:07:01.292 "nvme_admin": false, 00:07:01.292 "nvme_io": false, 00:07:01.292 "nvme_io_md": 
false, 00:07:01.292 "write_zeroes": true, 00:07:01.292 "zcopy": true, 00:07:01.292 "get_zone_info": false, 00:07:01.292 "zone_management": false, 00:07:01.292 "zone_append": false, 00:07:01.292 "compare": false, 00:07:01.292 "compare_and_write": false, 00:07:01.292 "abort": true, 00:07:01.292 "seek_hole": false, 00:07:01.292 "seek_data": false, 00:07:01.292 "copy": true, 00:07:01.292 "nvme_iov_md": false 00:07:01.292 }, 00:07:01.292 "memory_domains": [ 00:07:01.292 { 00:07:01.292 "dma_device_id": "system", 00:07:01.292 "dma_device_type": 1 00:07:01.292 }, 00:07:01.292 { 00:07:01.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.292 "dma_device_type": 2 00:07:01.292 } 00:07:01.292 ], 00:07:01.292 "driver_specific": {} 00:07:01.292 } 00:07:01.292 ] 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:01.292 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.293 "name": "Existed_Raid", 00:07:01.293 "uuid": "049ae5ff-2837-4542-b84a-fb11dd1cbed8", 00:07:01.293 "strip_size_kb": 64, 00:07:01.293 "state": "online", 00:07:01.293 "raid_level": "concat", 00:07:01.293 "superblock": false, 00:07:01.293 "num_base_bdevs": 2, 00:07:01.293 "num_base_bdevs_discovered": 2, 00:07:01.293 "num_base_bdevs_operational": 2, 00:07:01.293 "base_bdevs_list": [ 00:07:01.293 { 00:07:01.293 "name": "BaseBdev1", 00:07:01.293 "uuid": "c73163e5-9536-43d6-a164-6830d5797ccb", 00:07:01.293 "is_configured": true, 00:07:01.293 "data_offset": 0, 00:07:01.293 "data_size": 65536 00:07:01.293 }, 00:07:01.293 { 00:07:01.293 "name": "BaseBdev2", 00:07:01.293 "uuid": "47988009-fe3a-4fe0-a903-f555f5e2728f", 00:07:01.293 "is_configured": true, 00:07:01.293 "data_offset": 0, 00:07:01.293 "data_size": 65536 00:07:01.293 } 00:07:01.293 ] 00:07:01.293 }' 00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:01.293 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.553 [2024-12-07 01:51:06.932559] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.553 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.553 "name": "Existed_Raid", 00:07:01.553 "aliases": [ 00:07:01.553 "049ae5ff-2837-4542-b84a-fb11dd1cbed8" 00:07:01.553 ], 00:07:01.553 "product_name": "Raid Volume", 00:07:01.553 "block_size": 512, 00:07:01.553 "num_blocks": 131072, 00:07:01.553 "uuid": "049ae5ff-2837-4542-b84a-fb11dd1cbed8", 00:07:01.553 "assigned_rate_limits": { 00:07:01.553 "rw_ios_per_sec": 0, 00:07:01.553 "rw_mbytes_per_sec": 0, 00:07:01.553 "r_mbytes_per_sec": 
0, 00:07:01.553 "w_mbytes_per_sec": 0 00:07:01.553 }, 00:07:01.553 "claimed": false, 00:07:01.553 "zoned": false, 00:07:01.553 "supported_io_types": { 00:07:01.553 "read": true, 00:07:01.553 "write": true, 00:07:01.553 "unmap": true, 00:07:01.553 "flush": true, 00:07:01.553 "reset": true, 00:07:01.553 "nvme_admin": false, 00:07:01.553 "nvme_io": false, 00:07:01.553 "nvme_io_md": false, 00:07:01.553 "write_zeroes": true, 00:07:01.553 "zcopy": false, 00:07:01.553 "get_zone_info": false, 00:07:01.553 "zone_management": false, 00:07:01.553 "zone_append": false, 00:07:01.553 "compare": false, 00:07:01.553 "compare_and_write": false, 00:07:01.553 "abort": false, 00:07:01.553 "seek_hole": false, 00:07:01.553 "seek_data": false, 00:07:01.553 "copy": false, 00:07:01.553 "nvme_iov_md": false 00:07:01.553 }, 00:07:01.553 "memory_domains": [ 00:07:01.553 { 00:07:01.553 "dma_device_id": "system", 00:07:01.553 "dma_device_type": 1 00:07:01.553 }, 00:07:01.553 { 00:07:01.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.553 "dma_device_type": 2 00:07:01.554 }, 00:07:01.554 { 00:07:01.554 "dma_device_id": "system", 00:07:01.554 "dma_device_type": 1 00:07:01.554 }, 00:07:01.554 { 00:07:01.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.554 "dma_device_type": 2 00:07:01.554 } 00:07:01.554 ], 00:07:01.554 "driver_specific": { 00:07:01.554 "raid": { 00:07:01.554 "uuid": "049ae5ff-2837-4542-b84a-fb11dd1cbed8", 00:07:01.554 "strip_size_kb": 64, 00:07:01.554 "state": "online", 00:07:01.554 "raid_level": "concat", 00:07:01.554 "superblock": false, 00:07:01.554 "num_base_bdevs": 2, 00:07:01.554 "num_base_bdevs_discovered": 2, 00:07:01.554 "num_base_bdevs_operational": 2, 00:07:01.554 "base_bdevs_list": [ 00:07:01.554 { 00:07:01.554 "name": "BaseBdev1", 00:07:01.554 "uuid": "c73163e5-9536-43d6-a164-6830d5797ccb", 00:07:01.554 "is_configured": true, 00:07:01.554 "data_offset": 0, 00:07:01.554 "data_size": 65536 00:07:01.554 }, 00:07:01.554 { 00:07:01.554 "name": "BaseBdev2", 
00:07:01.554 "uuid": "47988009-fe3a-4fe0-a903-f555f5e2728f", 00:07:01.554 "is_configured": true, 00:07:01.554 "data_offset": 0, 00:07:01.554 "data_size": 65536 00:07:01.554 } 00:07:01.554 ] 00:07:01.554 } 00:07:01.554 } 00:07:01.554 }' 00:07:01.554 01:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:01.814 BaseBdev2' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.814 [2024-12-07 01:51:07.155978] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.814 [2024-12-07 01:51:07.156060] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.814 [2024-12-07 01:51:07.156153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.814 "name": "Existed_Raid", 00:07:01.814 "uuid": "049ae5ff-2837-4542-b84a-fb11dd1cbed8", 00:07:01.814 "strip_size_kb": 64, 00:07:01.814 
"state": "offline", 00:07:01.814 "raid_level": "concat", 00:07:01.814 "superblock": false, 00:07:01.814 "num_base_bdevs": 2, 00:07:01.814 "num_base_bdevs_discovered": 1, 00:07:01.814 "num_base_bdevs_operational": 1, 00:07:01.814 "base_bdevs_list": [ 00:07:01.814 { 00:07:01.814 "name": null, 00:07:01.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.814 "is_configured": false, 00:07:01.814 "data_offset": 0, 00:07:01.814 "data_size": 65536 00:07:01.814 }, 00:07:01.814 { 00:07:01.814 "name": "BaseBdev2", 00:07:01.814 "uuid": "47988009-fe3a-4fe0-a903-f555f5e2728f", 00:07:01.814 "is_configured": true, 00:07:01.814 "data_offset": 0, 00:07:01.814 "data_size": 65536 00:07:01.814 } 00:07:01.814 ] 00:07:01.814 }' 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.814 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 [2024-12-07 01:51:07.590854] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:02.386 [2024-12-07 01:51:07.590947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72789 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 72789 ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 72789 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72789 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72789' 00:07:02.386 killing process with pid 72789 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 72789 00:07:02.386 [2024-12-07 01:51:07.675251] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.386 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 72789 00:07:02.386 [2024-12-07 01:51:07.676268] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:02.646 00:07:02.646 real 0m3.760s 00:07:02.646 user 0m5.944s 00:07:02.646 sys 0m0.702s 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.646 ************************************ 00:07:02.646 END TEST raid_state_function_test 00:07:02.646 ************************************ 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.646 01:51:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:02.646 01:51:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:07:02.646 01:51:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.646 01:51:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.646 ************************************ 00:07:02.646 START TEST raid_state_function_test_sb 00:07:02.646 ************************************ 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73031 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73031' 00:07:02.646 Process raid pid: 73031 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73031 00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73031 ']' 00:07:02.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.646 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.647 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.647 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.647 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.647 01:51:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.647 [2024-12-07 01:51:08.061084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:02.647 [2024-12-07 01:51:08.061211] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.906 [2024-12-07 01:51:08.205123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.906 [2024-12-07 01:51:08.248504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.906 [2024-12-07 01:51:08.289733] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:02.906 [2024-12-07 01:51:08.289770] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.477 [2024-12-07 01:51:08.898737] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.477 [2024-12-07 01:51:08.898783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.477 [2024-12-07 01:51:08.898795] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.477 [2024-12-07 01:51:08.898805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.477 
01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.477 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.767 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.767 "name": "Existed_Raid", 00:07:03.767 "uuid": "fa6c7988-604a-4ff7-aeb2-e5804f4bb26c", 00:07:03.767 "strip_size_kb": 64, 00:07:03.767 "state": "configuring", 00:07:03.767 "raid_level": "concat", 00:07:03.767 "superblock": true, 00:07:03.767 "num_base_bdevs": 2, 00:07:03.767 "num_base_bdevs_discovered": 0, 00:07:03.767 "num_base_bdevs_operational": 2, 00:07:03.767 "base_bdevs_list": [ 00:07:03.767 { 00:07:03.767 "name": "BaseBdev1", 00:07:03.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.767 "is_configured": false, 00:07:03.767 "data_offset": 0, 00:07:03.767 "data_size": 0 00:07:03.767 }, 00:07:03.767 { 00:07:03.767 "name": "BaseBdev2", 00:07:03.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.767 "is_configured": false, 00:07:03.767 "data_offset": 0, 00:07:03.767 "data_size": 0 00:07:03.767 } 00:07:03.767 ] 00:07:03.767 }' 00:07:03.767 01:51:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.767 01:51:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 [2024-12-07 01:51:09.297942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.027 [2024-12-07 01:51:09.298044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 [2024-12-07 01:51:09.309920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.027 [2024-12-07 01:51:09.309995] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.027 [2024-12-07 01:51:09.310032] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.027 [2024-12-07 01:51:09.310055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 [2024-12-07 01:51:09.330580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:07:04.027 BaseBdev1 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.027 [ 00:07:04.027 { 00:07:04.027 "name": "BaseBdev1", 00:07:04.027 "aliases": [ 00:07:04.027 "53911205-b068-4e38-bf9e-bd7524c5aaf4" 00:07:04.027 ], 00:07:04.027 "product_name": "Malloc disk", 00:07:04.027 "block_size": 512, 00:07:04.027 "num_blocks": 65536, 00:07:04.027 "uuid": "53911205-b068-4e38-bf9e-bd7524c5aaf4", 00:07:04.027 
"assigned_rate_limits": { 00:07:04.027 "rw_ios_per_sec": 0, 00:07:04.027 "rw_mbytes_per_sec": 0, 00:07:04.027 "r_mbytes_per_sec": 0, 00:07:04.027 "w_mbytes_per_sec": 0 00:07:04.027 }, 00:07:04.027 "claimed": true, 00:07:04.027 "claim_type": "exclusive_write", 00:07:04.027 "zoned": false, 00:07:04.027 "supported_io_types": { 00:07:04.027 "read": true, 00:07:04.027 "write": true, 00:07:04.027 "unmap": true, 00:07:04.027 "flush": true, 00:07:04.027 "reset": true, 00:07:04.027 "nvme_admin": false, 00:07:04.027 "nvme_io": false, 00:07:04.027 "nvme_io_md": false, 00:07:04.027 "write_zeroes": true, 00:07:04.027 "zcopy": true, 00:07:04.027 "get_zone_info": false, 00:07:04.027 "zone_management": false, 00:07:04.027 "zone_append": false, 00:07:04.027 "compare": false, 00:07:04.027 "compare_and_write": false, 00:07:04.027 "abort": true, 00:07:04.027 "seek_hole": false, 00:07:04.027 "seek_data": false, 00:07:04.027 "copy": true, 00:07:04.027 "nvme_iov_md": false 00:07:04.027 }, 00:07:04.027 "memory_domains": [ 00:07:04.027 { 00:07:04.027 "dma_device_id": "system", 00:07:04.027 "dma_device_type": 1 00:07:04.027 }, 00:07:04.027 { 00:07:04.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.027 "dma_device_type": 2 00:07:04.027 } 00:07:04.027 ], 00:07:04.027 "driver_specific": {} 00:07:04.027 } 00:07:04.027 ] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.027 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.028 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.028 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.028 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.028 "name": "Existed_Raid", 00:07:04.028 "uuid": "0a71b067-5789-453a-bc98-d933e64ec0ad", 00:07:04.028 "strip_size_kb": 64, 00:07:04.028 "state": "configuring", 00:07:04.028 "raid_level": "concat", 00:07:04.028 "superblock": true, 00:07:04.028 "num_base_bdevs": 2, 00:07:04.028 "num_base_bdevs_discovered": 1, 00:07:04.028 "num_base_bdevs_operational": 2, 00:07:04.028 "base_bdevs_list": [ 00:07:04.028 { 00:07:04.028 "name": "BaseBdev1", 00:07:04.028 "uuid": "53911205-b068-4e38-bf9e-bd7524c5aaf4", 00:07:04.028 "is_configured": true, 00:07:04.028 "data_offset": 
2048, 00:07:04.028 "data_size": 63488 00:07:04.028 }, 00:07:04.028 { 00:07:04.028 "name": "BaseBdev2", 00:07:04.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.028 "is_configured": false, 00:07:04.028 "data_offset": 0, 00:07:04.028 "data_size": 0 00:07:04.028 } 00:07:04.028 ] 00:07:04.028 }' 00:07:04.028 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.028 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.596 [2024-12-07 01:51:09.853721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.596 [2024-12-07 01:51:09.853833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.596 [2024-12-07 01:51:09.865751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.596 [2024-12-07 01:51:09.867561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.596 [2024-12-07 01:51:09.867603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.596 01:51:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.597 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.597 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.597 "name": "Existed_Raid", 00:07:04.597 "uuid": "9008b2f8-517d-4268-a000-ae78c2d155da", 00:07:04.597 "strip_size_kb": 64, 00:07:04.597 "state": "configuring", 00:07:04.597 "raid_level": "concat", 00:07:04.597 "superblock": true, 00:07:04.597 "num_base_bdevs": 2, 00:07:04.597 "num_base_bdevs_discovered": 1, 00:07:04.597 "num_base_bdevs_operational": 2, 00:07:04.597 "base_bdevs_list": [ 00:07:04.597 { 00:07:04.597 "name": "BaseBdev1", 00:07:04.597 "uuid": "53911205-b068-4e38-bf9e-bd7524c5aaf4", 00:07:04.597 "is_configured": true, 00:07:04.597 "data_offset": 2048, 00:07:04.597 "data_size": 63488 00:07:04.597 }, 00:07:04.597 { 00:07:04.597 "name": "BaseBdev2", 00:07:04.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.597 "is_configured": false, 00:07:04.597 "data_offset": 0, 00:07:04.597 "data_size": 0 00:07:04.597 } 00:07:04.597 ] 00:07:04.597 }' 00:07:04.597 01:51:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.597 01:51:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.856 [2024-12-07 01:51:10.303320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.856 [2024-12-07 01:51:10.304079] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:04.856 [2024-12-07 01:51:10.304260] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.856 BaseBdev2 00:07:04.856 [2024-12-07 01:51:10.305206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:04.856 [2024-12-07 01:51:10.305775] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.856 [2024-12-07 01:51:10.305953] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:04.856 [2024-12-07 01:51:10.306456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.856 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.116 [ 00:07:05.116 { 00:07:05.116 "name": "BaseBdev2", 00:07:05.116 "aliases": [ 00:07:05.116 "6ceddcfc-e44c-4ec4-97f9-de4689184e08" 00:07:05.116 ], 00:07:05.116 "product_name": "Malloc disk", 00:07:05.116 "block_size": 512, 00:07:05.116 "num_blocks": 65536, 00:07:05.116 "uuid": "6ceddcfc-e44c-4ec4-97f9-de4689184e08", 00:07:05.116 "assigned_rate_limits": { 00:07:05.116 "rw_ios_per_sec": 0, 00:07:05.116 "rw_mbytes_per_sec": 0, 00:07:05.116 "r_mbytes_per_sec": 0, 00:07:05.116 "w_mbytes_per_sec": 0 00:07:05.116 }, 00:07:05.116 "claimed": true, 00:07:05.116 "claim_type": "exclusive_write", 00:07:05.116 "zoned": false, 00:07:05.116 "supported_io_types": { 00:07:05.116 "read": true, 00:07:05.116 "write": true, 00:07:05.116 "unmap": true, 00:07:05.116 "flush": true, 00:07:05.116 "reset": true, 00:07:05.116 "nvme_admin": false, 00:07:05.116 "nvme_io": false, 00:07:05.116 "nvme_io_md": false, 00:07:05.116 "write_zeroes": true, 00:07:05.116 "zcopy": true, 00:07:05.116 "get_zone_info": false, 00:07:05.116 "zone_management": false, 00:07:05.116 "zone_append": false, 00:07:05.116 "compare": false, 00:07:05.116 "compare_and_write": false, 00:07:05.116 "abort": true, 00:07:05.116 "seek_hole": false, 00:07:05.116 "seek_data": false, 00:07:05.116 "copy": true, 00:07:05.116 "nvme_iov_md": false 00:07:05.116 }, 00:07:05.116 "memory_domains": [ 00:07:05.116 { 00:07:05.116 "dma_device_id": "system", 00:07:05.116 "dma_device_type": 1 00:07:05.116 }, 00:07:05.116 { 00:07:05.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.116 "dma_device_type": 2 00:07:05.116 } 00:07:05.116 ], 00:07:05.116 "driver_specific": {} 00:07:05.116 } 00:07:05.116 ] 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.116 "name": "Existed_Raid", 00:07:05.116 "uuid": "9008b2f8-517d-4268-a000-ae78c2d155da", 00:07:05.116 "strip_size_kb": 64, 00:07:05.116 "state": "online", 00:07:05.116 "raid_level": "concat", 00:07:05.116 "superblock": true, 00:07:05.116 "num_base_bdevs": 2, 00:07:05.116 "num_base_bdevs_discovered": 2, 00:07:05.116 "num_base_bdevs_operational": 2, 00:07:05.116 "base_bdevs_list": [ 00:07:05.116 { 00:07:05.116 "name": "BaseBdev1", 00:07:05.116 "uuid": "53911205-b068-4e38-bf9e-bd7524c5aaf4", 00:07:05.116 "is_configured": true, 00:07:05.116 "data_offset": 2048, 00:07:05.116 "data_size": 63488 00:07:05.116 }, 00:07:05.116 { 00:07:05.116 "name": "BaseBdev2", 00:07:05.116 "uuid": "6ceddcfc-e44c-4ec4-97f9-de4689184e08", 00:07:05.116 "is_configured": true, 00:07:05.116 "data_offset": 2048, 00:07:05.116 "data_size": 63488 00:07:05.116 } 00:07:05.116 ] 00:07:05.116 }' 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.116 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.376 [2024-12-07 01:51:10.770754] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.376 "name": "Existed_Raid", 00:07:05.376 "aliases": [ 00:07:05.376 "9008b2f8-517d-4268-a000-ae78c2d155da" 00:07:05.376 ], 00:07:05.376 "product_name": "Raid Volume", 00:07:05.376 "block_size": 512, 00:07:05.376 "num_blocks": 126976, 00:07:05.376 "uuid": "9008b2f8-517d-4268-a000-ae78c2d155da", 00:07:05.376 "assigned_rate_limits": { 00:07:05.376 "rw_ios_per_sec": 0, 00:07:05.376 "rw_mbytes_per_sec": 0, 00:07:05.376 "r_mbytes_per_sec": 0, 00:07:05.376 "w_mbytes_per_sec": 0 00:07:05.376 }, 00:07:05.376 "claimed": false, 00:07:05.376 "zoned": false, 00:07:05.376 "supported_io_types": { 00:07:05.376 "read": true, 00:07:05.376 "write": true, 00:07:05.376 "unmap": true, 00:07:05.376 "flush": true, 00:07:05.376 "reset": true, 00:07:05.376 "nvme_admin": false, 00:07:05.376 "nvme_io": false, 00:07:05.376 "nvme_io_md": false, 00:07:05.376 "write_zeroes": true, 00:07:05.376 "zcopy": false, 00:07:05.376 "get_zone_info": false, 00:07:05.376 "zone_management": false, 00:07:05.376 "zone_append": false, 00:07:05.376 "compare": false, 00:07:05.376 "compare_and_write": false, 00:07:05.376 "abort": false, 00:07:05.376 "seek_hole": false, 
00:07:05.376 "seek_data": false, 00:07:05.376 "copy": false, 00:07:05.376 "nvme_iov_md": false 00:07:05.376 }, 00:07:05.376 "memory_domains": [ 00:07:05.376 { 00:07:05.376 "dma_device_id": "system", 00:07:05.376 "dma_device_type": 1 00:07:05.376 }, 00:07:05.376 { 00:07:05.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.376 "dma_device_type": 2 00:07:05.376 }, 00:07:05.376 { 00:07:05.376 "dma_device_id": "system", 00:07:05.376 "dma_device_type": 1 00:07:05.376 }, 00:07:05.376 { 00:07:05.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.376 "dma_device_type": 2 00:07:05.376 } 00:07:05.376 ], 00:07:05.376 "driver_specific": { 00:07:05.376 "raid": { 00:07:05.376 "uuid": "9008b2f8-517d-4268-a000-ae78c2d155da", 00:07:05.376 "strip_size_kb": 64, 00:07:05.376 "state": "online", 00:07:05.376 "raid_level": "concat", 00:07:05.376 "superblock": true, 00:07:05.376 "num_base_bdevs": 2, 00:07:05.376 "num_base_bdevs_discovered": 2, 00:07:05.376 "num_base_bdevs_operational": 2, 00:07:05.376 "base_bdevs_list": [ 00:07:05.376 { 00:07:05.376 "name": "BaseBdev1", 00:07:05.376 "uuid": "53911205-b068-4e38-bf9e-bd7524c5aaf4", 00:07:05.376 "is_configured": true, 00:07:05.376 "data_offset": 2048, 00:07:05.376 "data_size": 63488 00:07:05.376 }, 00:07:05.376 { 00:07:05.376 "name": "BaseBdev2", 00:07:05.376 "uuid": "6ceddcfc-e44c-4ec4-97f9-de4689184e08", 00:07:05.376 "is_configured": true, 00:07:05.376 "data_offset": 2048, 00:07:05.376 "data_size": 63488 00:07:05.376 } 00:07:05.376 ] 00:07:05.376 } 00:07:05.376 } 00:07:05.376 }' 00:07:05.376 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:05.636 BaseBdev2' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.636 01:51:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.636 01:51:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.636 [2024-12-07 01:51:10.998140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:05.636 [2024-12-07 01:51:10.998166] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.636 [2024-12-07 01:51:10.998218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.636 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.637 01:51:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.637 "name": "Existed_Raid", 00:07:05.637 "uuid": "9008b2f8-517d-4268-a000-ae78c2d155da", 00:07:05.637 "strip_size_kb": 64, 00:07:05.637 "state": "offline", 00:07:05.637 "raid_level": "concat", 00:07:05.637 "superblock": true, 00:07:05.637 "num_base_bdevs": 2, 00:07:05.637 "num_base_bdevs_discovered": 1, 00:07:05.637 "num_base_bdevs_operational": 1, 00:07:05.637 "base_bdevs_list": [ 00:07:05.637 { 00:07:05.637 "name": null, 00:07:05.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.637 "is_configured": false, 00:07:05.637 "data_offset": 0, 00:07:05.637 "data_size": 63488 00:07:05.637 }, 00:07:05.637 { 00:07:05.637 "name": 
"BaseBdev2", 00:07:05.637 "uuid": "6ceddcfc-e44c-4ec4-97f9-de4689184e08", 00:07:05.637 "is_configured": true, 00:07:05.637 "data_offset": 2048, 00:07:05.637 "data_size": 63488 00:07:05.637 } 00:07:05.637 ] 00:07:05.637 }' 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.637 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 [2024-12-07 01:51:11.476383] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:06.207 [2024-12-07 01:51:11.476434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73031 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73031 ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73031 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73031 00:07:06.207 killing process with 
pid 73031 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73031' 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73031 00:07:06.207 [2024-12-07 01:51:11.584467] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.207 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 73031 00:07:06.207 [2024-12-07 01:51:11.585422] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:06.467 01:51:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:06.467 00:07:06.467 real 0m3.850s 00:07:06.467 user 0m6.044s 00:07:06.467 sys 0m0.740s 00:07:06.467 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.467 01:51:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.467 ************************************ 00:07:06.467 END TEST raid_state_function_test_sb 00:07:06.467 ************************************ 00:07:06.467 01:51:11 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:06.467 01:51:11 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:06.467 01:51:11 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.467 01:51:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:06.467 ************************************ 00:07:06.467 START TEST raid_superblock_test 00:07:06.467 ************************************ 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # 
raid_superblock_test concat 2 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73267 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:06.467 01:51:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73267 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 73267 ']' 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.467 01:51:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.726 [2024-12-07 01:51:11.972557] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:06.726 [2024-12-07 01:51:11.972790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73267 ] 00:07:06.726 [2024-12-07 01:51:12.097882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.726 [2024-12-07 01:51:12.143727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.726 [2024-12-07 01:51:12.185786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.986 [2024-12-07 01:51:12.185890] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.555 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:07.556 
01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 malloc1 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 [2024-12-07 01:51:12.851483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:07.556 [2024-12-07 01:51:12.851583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.556 [2024-12-07 01:51:12.851630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:07.556 [2024-12-07 01:51:12.851687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.556 [2024-12-07 01:51:12.853921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.556 [2024-12-07 01:51:12.853996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:07.556 pt1 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 malloc2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 [2024-12-07 01:51:12.896868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:07.556 [2024-12-07 01:51:12.897063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.556 [2024-12-07 01:51:12.897110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.556 [2024-12-07 01:51:12.897135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.556 [2024-12-07 01:51:12.901944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.556 [2024-12-07 01:51:12.902100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:07.556 
pt2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 [2024-12-07 01:51:12.910396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:07.556 [2024-12-07 01:51:12.913253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:07.556 [2024-12-07 01:51:12.913527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:07.556 [2024-12-07 01:51:12.913559] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:07.556 [2024-12-07 01:51:12.913973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:07.556 [2024-12-07 01:51:12.914187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:07.556 [2024-12-07 01:51:12.914204] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:07.556 [2024-12-07 01:51:12.914440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.556 "name": "raid_bdev1", 00:07:07.556 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:07.556 "strip_size_kb": 64, 00:07:07.556 "state": "online", 00:07:07.556 "raid_level": "concat", 00:07:07.556 "superblock": true, 00:07:07.556 "num_base_bdevs": 2, 00:07:07.556 "num_base_bdevs_discovered": 2, 00:07:07.556 "num_base_bdevs_operational": 2, 00:07:07.556 "base_bdevs_list": [ 00:07:07.556 { 00:07:07.556 "name": "pt1", 
00:07:07.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:07.556 "is_configured": true, 00:07:07.556 "data_offset": 2048, 00:07:07.556 "data_size": 63488 00:07:07.556 }, 00:07:07.556 { 00:07:07.556 "name": "pt2", 00:07:07.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:07.556 "is_configured": true, 00:07:07.556 "data_offset": 2048, 00:07:07.556 "data_size": 63488 00:07:07.556 } 00:07:07.556 ] 00:07:07.556 }' 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.556 01:51:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.124 [2024-12-07 01:51:13.325990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.124 01:51:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.124 "name": "raid_bdev1", 00:07:08.124 "aliases": [ 00:07:08.124 "1327b999-a908-44a9-b971-bb25d3e615bc" 00:07:08.124 ], 00:07:08.124 "product_name": "Raid Volume", 00:07:08.124 "block_size": 512, 00:07:08.124 "num_blocks": 126976, 00:07:08.124 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:08.124 "assigned_rate_limits": { 00:07:08.124 "rw_ios_per_sec": 0, 00:07:08.124 "rw_mbytes_per_sec": 0, 00:07:08.124 "r_mbytes_per_sec": 0, 00:07:08.125 "w_mbytes_per_sec": 0 00:07:08.125 }, 00:07:08.125 "claimed": false, 00:07:08.125 "zoned": false, 00:07:08.125 "supported_io_types": { 00:07:08.125 "read": true, 00:07:08.125 "write": true, 00:07:08.125 "unmap": true, 00:07:08.125 "flush": true, 00:07:08.125 "reset": true, 00:07:08.125 "nvme_admin": false, 00:07:08.125 "nvme_io": false, 00:07:08.125 "nvme_io_md": false, 00:07:08.125 "write_zeroes": true, 00:07:08.125 "zcopy": false, 00:07:08.125 "get_zone_info": false, 00:07:08.125 "zone_management": false, 00:07:08.125 "zone_append": false, 00:07:08.125 "compare": false, 00:07:08.125 "compare_and_write": false, 00:07:08.125 "abort": false, 00:07:08.125 "seek_hole": false, 00:07:08.125 "seek_data": false, 00:07:08.125 "copy": false, 00:07:08.125 "nvme_iov_md": false 00:07:08.125 }, 00:07:08.125 "memory_domains": [ 00:07:08.125 { 00:07:08.125 "dma_device_id": "system", 00:07:08.125 "dma_device_type": 1 00:07:08.125 }, 00:07:08.125 { 00:07:08.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.125 "dma_device_type": 2 00:07:08.125 }, 00:07:08.125 { 00:07:08.125 "dma_device_id": "system", 00:07:08.125 "dma_device_type": 1 00:07:08.125 }, 00:07:08.125 { 00:07:08.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.125 "dma_device_type": 2 00:07:08.125 } 00:07:08.125 ], 00:07:08.125 "driver_specific": { 00:07:08.125 "raid": { 00:07:08.125 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:08.125 "strip_size_kb": 64, 00:07:08.125 "state": "online", 00:07:08.125 
"raid_level": "concat", 00:07:08.125 "superblock": true, 00:07:08.125 "num_base_bdevs": 2, 00:07:08.125 "num_base_bdevs_discovered": 2, 00:07:08.125 "num_base_bdevs_operational": 2, 00:07:08.125 "base_bdevs_list": [ 00:07:08.125 { 00:07:08.125 "name": "pt1", 00:07:08.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.125 "is_configured": true, 00:07:08.125 "data_offset": 2048, 00:07:08.125 "data_size": 63488 00:07:08.125 }, 00:07:08.125 { 00:07:08.125 "name": "pt2", 00:07:08.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.125 "is_configured": true, 00:07:08.125 "data_offset": 2048, 00:07:08.125 "data_size": 63488 00:07:08.125 } 00:07:08.125 ] 00:07:08.125 } 00:07:08.125 } 00:07:08.125 }' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.125 pt2' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.125 01:51:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.125 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.125 [2024-12-07 01:51:13.577435] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1327b999-a908-44a9-b971-bb25d3e615bc 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
1327b999-a908-44a9-b971-bb25d3e615bc ']' 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.386 [2024-12-07 01:51:13.621112] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.386 [2024-12-07 01:51:13.621140] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.386 [2024-12-07 01:51:13.621222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.386 [2024-12-07 01:51:13.621275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.386 [2024-12-07 01:51:13.621286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.386 01:51:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.386 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 [2024-12-07 01:51:13.736958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:08.387 [2024-12-07 01:51:13.738829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:08.387 [2024-12-07 01:51:13.738906] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:08.387 [2024-12-07 01:51:13.738961] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:08.387 [2024-12-07 01:51:13.738980] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.387 [2024-12-07 01:51:13.738993] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:08.387 request: 00:07:08.387 { 00:07:08.387 "name": "raid_bdev1", 00:07:08.387 "raid_level": "concat", 00:07:08.387 "base_bdevs": [ 00:07:08.387 "malloc1", 00:07:08.387 "malloc2" 00:07:08.387 ], 00:07:08.387 "strip_size_kb": 64, 
00:07:08.387 "superblock": false, 00:07:08.387 "method": "bdev_raid_create", 00:07:08.387 "req_id": 1 00:07:08.387 } 00:07:08.387 Got JSON-RPC error response 00:07:08.387 response: 00:07:08.387 { 00:07:08.387 "code": -17, 00:07:08.387 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:08.387 } 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 [2024-12-07 01:51:13.792814] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:08.387 [2024-12-07 01:51:13.792907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.387 [2024-12-07 01:51:13.792946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:08.387 [2024-12-07 01:51:13.792974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.387 [2024-12-07 01:51:13.795176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.387 [2024-12-07 01:51:13.795244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:08.387 [2024-12-07 01:51:13.795341] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:08.387 [2024-12-07 01:51:13.795409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:08.387 pt1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.387 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.647 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.647 "name": "raid_bdev1", 00:07:08.647 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:08.647 "strip_size_kb": 64, 00:07:08.647 "state": "configuring", 00:07:08.647 "raid_level": "concat", 00:07:08.647 "superblock": true, 00:07:08.647 "num_base_bdevs": 2, 00:07:08.647 "num_base_bdevs_discovered": 1, 00:07:08.647 "num_base_bdevs_operational": 2, 00:07:08.647 "base_bdevs_list": [ 00:07:08.647 { 00:07:08.647 "name": "pt1", 00:07:08.647 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.647 "is_configured": true, 00:07:08.647 "data_offset": 2048, 00:07:08.647 "data_size": 63488 00:07:08.647 }, 00:07:08.647 { 00:07:08.647 "name": null, 00:07:08.647 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.647 "is_configured": false, 00:07:08.647 "data_offset": 2048, 00:07:08.647 "data_size": 63488 00:07:08.647 } 00:07:08.647 ] 00:07:08.647 }' 00:07:08.647 01:51:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.647 01:51:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.907 [2024-12-07 01:51:14.244094] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:08.907 [2024-12-07 01:51:14.244169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.907 [2024-12-07 01:51:14.244202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:08.907 [2024-12-07 01:51:14.244227] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.907 [2024-12-07 01:51:14.244628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.907 [2024-12-07 01:51:14.244651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:08.907 [2024-12-07 01:51:14.244739] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:08.907 [2024-12-07 01:51:14.244764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:08.907 [2024-12-07 01:51:14.244855] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:08.907 [2024-12-07 01:51:14.244864] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.907 [2024-12-07 01:51:14.245107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:08.907 [2024-12-07 01:51:14.245222] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 
00:07:08.907 [2024-12-07 01:51:14.245236] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:08.907 [2024-12-07 01:51:14.245335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.907 pt2 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:08.907 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.908 "name": "raid_bdev1", 00:07:08.908 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:08.908 "strip_size_kb": 64, 00:07:08.908 "state": "online", 00:07:08.908 "raid_level": "concat", 00:07:08.908 "superblock": true, 00:07:08.908 "num_base_bdevs": 2, 00:07:08.908 "num_base_bdevs_discovered": 2, 00:07:08.908 "num_base_bdevs_operational": 2, 00:07:08.908 "base_bdevs_list": [ 00:07:08.908 { 00:07:08.908 "name": "pt1", 00:07:08.908 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.908 "is_configured": true, 00:07:08.908 "data_offset": 2048, 00:07:08.908 "data_size": 63488 00:07:08.908 }, 00:07:08.908 { 00:07:08.908 "name": "pt2", 00:07:08.908 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.908 "is_configured": true, 00:07:08.908 "data_offset": 2048, 00:07:08.908 "data_size": 63488 00:07:08.908 } 00:07:08.908 ] 00:07:08.908 }' 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.908 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.563 01:51:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.563 [2024-12-07 01:51:14.659701] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.563 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.563 "name": "raid_bdev1", 00:07:09.564 "aliases": [ 00:07:09.564 "1327b999-a908-44a9-b971-bb25d3e615bc" 00:07:09.564 ], 00:07:09.564 "product_name": "Raid Volume", 00:07:09.564 "block_size": 512, 00:07:09.564 "num_blocks": 126976, 00:07:09.564 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:09.564 "assigned_rate_limits": { 00:07:09.564 "rw_ios_per_sec": 0, 00:07:09.564 "rw_mbytes_per_sec": 0, 00:07:09.564 "r_mbytes_per_sec": 0, 00:07:09.564 "w_mbytes_per_sec": 0 00:07:09.564 }, 00:07:09.564 "claimed": false, 00:07:09.564 "zoned": false, 00:07:09.564 "supported_io_types": { 00:07:09.564 "read": true, 00:07:09.564 "write": true, 00:07:09.564 "unmap": true, 00:07:09.564 "flush": true, 00:07:09.564 "reset": true, 00:07:09.564 "nvme_admin": false, 00:07:09.564 "nvme_io": false, 00:07:09.564 "nvme_io_md": false, 00:07:09.564 "write_zeroes": true, 00:07:09.564 "zcopy": false, 00:07:09.564 "get_zone_info": false, 00:07:09.564 "zone_management": false, 00:07:09.564 "zone_append": false, 00:07:09.564 "compare": false, 00:07:09.564 "compare_and_write": false, 00:07:09.564 "abort": false, 00:07:09.564 "seek_hole": false, 00:07:09.564 
"seek_data": false, 00:07:09.564 "copy": false, 00:07:09.564 "nvme_iov_md": false 00:07:09.564 }, 00:07:09.564 "memory_domains": [ 00:07:09.564 { 00:07:09.564 "dma_device_id": "system", 00:07:09.564 "dma_device_type": 1 00:07:09.564 }, 00:07:09.564 { 00:07:09.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.564 "dma_device_type": 2 00:07:09.564 }, 00:07:09.564 { 00:07:09.564 "dma_device_id": "system", 00:07:09.564 "dma_device_type": 1 00:07:09.564 }, 00:07:09.564 { 00:07:09.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.564 "dma_device_type": 2 00:07:09.564 } 00:07:09.564 ], 00:07:09.564 "driver_specific": { 00:07:09.564 "raid": { 00:07:09.564 "uuid": "1327b999-a908-44a9-b971-bb25d3e615bc", 00:07:09.564 "strip_size_kb": 64, 00:07:09.564 "state": "online", 00:07:09.564 "raid_level": "concat", 00:07:09.564 "superblock": true, 00:07:09.564 "num_base_bdevs": 2, 00:07:09.564 "num_base_bdevs_discovered": 2, 00:07:09.564 "num_base_bdevs_operational": 2, 00:07:09.564 "base_bdevs_list": [ 00:07:09.564 { 00:07:09.564 "name": "pt1", 00:07:09.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.564 "is_configured": true, 00:07:09.564 "data_offset": 2048, 00:07:09.564 "data_size": 63488 00:07:09.564 }, 00:07:09.564 { 00:07:09.564 "name": "pt2", 00:07:09.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.564 "is_configured": true, 00:07:09.564 "data_offset": 2048, 00:07:09.564 "data_size": 63488 00:07:09.564 } 00:07:09.564 ] 00:07:09.564 } 00:07:09.564 } 00:07:09.564 }' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.564 pt2' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.564 01:51:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.564 [2024-12-07 01:51:14.871263] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1327b999-a908-44a9-b971-bb25d3e615bc '!=' 1327b999-a908-44a9-b971-bb25d3e615bc ']' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73267 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 73267 ']' 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 73267 00:07:09.564 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73267 00:07:09.565 killing process with pid 73267 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 73267' 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 73267 00:07:09.565 [2024-12-07 01:51:14.948936] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:09.565 [2024-12-07 01:51:14.949034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:09.565 [2024-12-07 01:51:14.949087] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:09.565 [2024-12-07 01:51:14.949097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:09.565 01:51:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 73267 00:07:09.565 [2024-12-07 01:51:14.971720] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:09.828 ************************************ 00:07:09.828 END TEST raid_superblock_test 00:07:09.828 ************************************ 00:07:09.828 01:51:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:09.828 00:07:09.828 real 0m3.314s 00:07:09.828 user 0m5.138s 00:07:09.828 sys 0m0.692s 00:07:09.828 01:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.829 01:51:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.829 01:51:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:09.829 01:51:15 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.829 01:51:15 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.829 01:51:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.829 ************************************ 00:07:09.829 START TEST raid_read_error_test 00:07:09.829 ************************************ 00:07:09.829 01:51:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:09.829 01:51:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:09.829 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZbX4haloYM 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73462 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73462 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73462 ']' 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.090 01:51:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.090 [2024-12-07 01:51:15.392758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:10.090 [2024-12-07 01:51:15.393021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73462 ] 00:07:10.090 [2024-12-07 01:51:15.541687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.349 [2024-12-07 01:51:15.587248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.349 [2024-12-07 01:51:15.628775] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.349 [2024-12-07 01:51:15.628893] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 BaseBdev1_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 true 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 [2024-12-07 01:51:16.250208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:10.916 [2024-12-07 01:51:16.250264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.916 [2024-12-07 01:51:16.250307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:10.916 [2024-12-07 01:51:16.250322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.916 [2024-12-07 01:51:16.252433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.916 [2024-12-07 01:51:16.252471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:10.916 BaseBdev1 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 BaseBdev2_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 true 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 [2024-12-07 01:51:16.289534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:10.916 [2024-12-07 01:51:16.289635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:10.916 [2024-12-07 01:51:16.289678] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:10.916 [2024-12-07 01:51:16.289688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:10.916 [2024-12-07 01:51:16.291874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:10.916 [2024-12-07 01:51:16.291908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:10.916 BaseBdev2 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 [2024-12-07 01:51:16.297591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:10.916 [2024-12-07 01:51:16.299420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:10.916 [2024-12-07 01:51:16.299603] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:10.916 [2024-12-07 01:51:16.299616] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:10.916 [2024-12-07 01:51:16.299877] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:10.916 [2024-12-07 01:51:16.300044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:10.916 [2024-12-07 01:51:16.300061] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:10.916 [2024-12-07 01:51:16.300192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.916 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.917 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.917 "name": "raid_bdev1", 00:07:10.917 "uuid": "4898a468-196a-42a3-a535-430d857b924f", 00:07:10.917 "strip_size_kb": 64, 00:07:10.917 "state": "online", 00:07:10.917 "raid_level": "concat", 00:07:10.917 "superblock": true, 00:07:10.917 "num_base_bdevs": 2, 00:07:10.917 "num_base_bdevs_discovered": 2, 00:07:10.917 "num_base_bdevs_operational": 2, 00:07:10.917 "base_bdevs_list": [ 00:07:10.917 { 00:07:10.917 "name": "BaseBdev1", 00:07:10.917 "uuid": "58c58580-c040-58dd-b148-9fc3a05b4ba2", 00:07:10.917 "is_configured": true, 00:07:10.917 "data_offset": 2048, 00:07:10.917 "data_size": 63488 00:07:10.917 }, 00:07:10.917 { 00:07:10.917 "name": "BaseBdev2", 00:07:10.917 "uuid": "f029bd46-8e6d-5177-9252-2a8522b38e8a", 00:07:10.917 "is_configured": true, 00:07:10.917 "data_offset": 2048, 00:07:10.917 "data_size": 63488 00:07:10.917 } 00:07:10.917 ] 00:07:10.917 }' 00:07:10.917 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.917 01:51:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.485 01:51:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:11.485 01:51:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:11.485 [2024-12-07 01:51:16.825078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.424 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.424 "name": "raid_bdev1", 00:07:12.424 "uuid": "4898a468-196a-42a3-a535-430d857b924f", 00:07:12.425 "strip_size_kb": 64, 00:07:12.425 "state": "online", 00:07:12.425 "raid_level": "concat", 00:07:12.425 "superblock": true, 00:07:12.425 "num_base_bdevs": 2, 00:07:12.425 "num_base_bdevs_discovered": 2, 00:07:12.425 "num_base_bdevs_operational": 2, 00:07:12.425 "base_bdevs_list": [ 00:07:12.425 { 00:07:12.425 "name": "BaseBdev1", 00:07:12.425 "uuid": "58c58580-c040-58dd-b148-9fc3a05b4ba2", 00:07:12.425 "is_configured": true, 00:07:12.425 "data_offset": 2048, 00:07:12.425 "data_size": 63488 00:07:12.425 }, 00:07:12.425 { 00:07:12.425 "name": "BaseBdev2", 00:07:12.425 "uuid": "f029bd46-8e6d-5177-9252-2a8522b38e8a", 00:07:12.425 "is_configured": true, 00:07:12.425 "data_offset": 2048, 00:07:12.425 "data_size": 63488 00:07:12.425 } 00:07:12.425 ] 00:07:12.425 }' 00:07:12.425 01:51:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.425 01:51:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:12.994 01:51:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.994 [2024-12-07 01:51:18.164357] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:12.994 [2024-12-07 01:51:18.164438] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:12.994 [2024-12-07 01:51:18.166904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:12.994 [2024-12-07 01:51:18.166983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.994 [2024-12-07 01:51:18.167034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:12.994 [2024-12-07 01:51:18.167073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:12.994 { 00:07:12.994 "results": [ 00:07:12.994 { 00:07:12.994 "job": "raid_bdev1", 00:07:12.994 "core_mask": "0x1", 00:07:12.994 "workload": "randrw", 00:07:12.994 "percentage": 50, 00:07:12.994 "status": "finished", 00:07:12.994 "queue_depth": 1, 00:07:12.994 "io_size": 131072, 00:07:12.994 "runtime": 1.340123, 00:07:12.994 "iops": 17702.106448437942, 00:07:12.994 "mibps": 2212.7633060547428, 00:07:12.994 "io_failed": 1, 00:07:12.994 "io_timeout": 0, 00:07:12.994 "avg_latency_us": 78.077840434281, 00:07:12.994 "min_latency_us": 24.482096069868994, 00:07:12.994 "max_latency_us": 1337.907423580786 00:07:12.994 } 00:07:12.994 ], 00:07:12.994 "core_count": 1 00:07:12.994 } 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73462 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73462 ']' 00:07:12.994 01:51:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73462 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73462 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73462' 00:07:12.994 killing process with pid 73462 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73462 00:07:12.994 [2024-12-07 01:51:18.202481] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73462 00:07:12.994 [2024-12-07 01:51:18.217525] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZbX4haloYM 00:07:12.994 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:13.254 ************************************ 00:07:13.254 END TEST raid_read_error_test 00:07:13.254 ************************************ 00:07:13.254 00:07:13.254 real 0m3.189s 00:07:13.254 user 0m4.038s 00:07:13.254 sys 0m0.508s 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.254 01:51:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 01:51:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:13.254 01:51:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:13.254 01:51:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.254 01:51:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 ************************************ 00:07:13.254 START TEST raid_write_error_test 00:07:13.254 ************************************ 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.254 01:51:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d8S1bBhR07 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73596 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73596 00:07:13.254 01:51:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73596 ']' 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.254 01:51:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.254 [2024-12-07 01:51:18.629456] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:13.254 [2024-12-07 01:51:18.629632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73596 ] 00:07:13.513 [2024-12-07 01:51:18.773653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.513 [2024-12-07 01:51:18.817104] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.513 [2024-12-07 01:51:18.858313] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.513 [2024-12-07 01:51:18.858427] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.080 BaseBdev1_malloc 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.080 true 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.080 [2024-12-07 01:51:19.524089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:14.080 [2024-12-07 01:51:19.524145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.080 [2024-12-07 01:51:19.524177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:14.080 [2024-12-07 01:51:19.524192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.080 [2024-12-07 01:51:19.526309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.080 [2024-12-07 01:51:19.526351] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:14.080 BaseBdev1 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.080 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 BaseBdev2_malloc 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 true 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 [2024-12-07 01:51:19.571575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:14.338 [2024-12-07 01:51:19.571693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.338 [2024-12-07 01:51:19.571719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:14.338 
[2024-12-07 01:51:19.571728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.338 [2024-12-07 01:51:19.573861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.338 [2024-12-07 01:51:19.573891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:14.338 BaseBdev2 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 [2024-12-07 01:51:19.583643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.338 [2024-12-07 01:51:19.585522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:14.338 [2024-12-07 01:51:19.585704] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:14.338 [2024-12-07 01:51:19.585718] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.338 [2024-12-07 01:51:19.585958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:14.338 [2024-12-07 01:51:19.586090] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:14.338 [2024-12-07 01:51:19.586128] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:14.338 [2024-12-07 01:51:19.586257] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.338 
01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.338 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.338 "name": "raid_bdev1", 00:07:14.338 "uuid": "8864e292-5eae-4384-a0be-a1b8b8d42e81", 00:07:14.338 "strip_size_kb": 64, 00:07:14.338 "state": "online", 00:07:14.338 "raid_level": "concat", 00:07:14.338 "superblock": true, 
00:07:14.338 "num_base_bdevs": 2, 00:07:14.338 "num_base_bdevs_discovered": 2, 00:07:14.338 "num_base_bdevs_operational": 2, 00:07:14.338 "base_bdevs_list": [ 00:07:14.338 { 00:07:14.338 "name": "BaseBdev1", 00:07:14.339 "uuid": "fc7aea79-286a-51b4-a094-55bcc8072143", 00:07:14.339 "is_configured": true, 00:07:14.339 "data_offset": 2048, 00:07:14.339 "data_size": 63488 00:07:14.339 }, 00:07:14.339 { 00:07:14.339 "name": "BaseBdev2", 00:07:14.339 "uuid": "2eb8a003-72f6-5019-8d11-ec465ad67894", 00:07:14.339 "is_configured": true, 00:07:14.339 "data_offset": 2048, 00:07:14.339 "data_size": 63488 00:07:14.339 } 00:07:14.339 ] 00:07:14.339 }' 00:07:14.339 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.339 01:51:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.597 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:14.597 01:51:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:14.856 [2024-12-07 01:51:20.095205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.793 "name": "raid_bdev1", 00:07:15.793 "uuid": "8864e292-5eae-4384-a0be-a1b8b8d42e81", 00:07:15.793 "strip_size_kb": 64, 00:07:15.793 "state": "online", 00:07:15.793 "raid_level": "concat", 
00:07:15.793 "superblock": true, 00:07:15.793 "num_base_bdevs": 2, 00:07:15.793 "num_base_bdevs_discovered": 2, 00:07:15.793 "num_base_bdevs_operational": 2, 00:07:15.793 "base_bdevs_list": [ 00:07:15.793 { 00:07:15.793 "name": "BaseBdev1", 00:07:15.793 "uuid": "fc7aea79-286a-51b4-a094-55bcc8072143", 00:07:15.793 "is_configured": true, 00:07:15.793 "data_offset": 2048, 00:07:15.793 "data_size": 63488 00:07:15.793 }, 00:07:15.793 { 00:07:15.793 "name": "BaseBdev2", 00:07:15.793 "uuid": "2eb8a003-72f6-5019-8d11-ec465ad67894", 00:07:15.793 "is_configured": true, 00:07:15.793 "data_offset": 2048, 00:07:15.793 "data_size": 63488 00:07:15.793 } 00:07:15.793 ] 00:07:15.793 }' 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:15.793 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.052 [2024-12-07 01:51:21.446746] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.052 [2024-12-07 01:51:21.446776] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.052 [2024-12-07 01:51:21.449280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.052 [2024-12-07 01:51:21.449330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.052 [2024-12-07 01:51:21.449362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.052 [2024-12-07 01:51:21.449371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:16.052 { 
00:07:16.052 "results": [ 00:07:16.052 { 00:07:16.052 "job": "raid_bdev1", 00:07:16.052 "core_mask": "0x1", 00:07:16.052 "workload": "randrw", 00:07:16.052 "percentage": 50, 00:07:16.052 "status": "finished", 00:07:16.052 "queue_depth": 1, 00:07:16.052 "io_size": 131072, 00:07:16.052 "runtime": 1.352314, 00:07:16.052 "iops": 17575.79970332334, 00:07:16.052 "mibps": 2196.9749629154176, 00:07:16.052 "io_failed": 1, 00:07:16.052 "io_timeout": 0, 00:07:16.052 "avg_latency_us": 78.66130839754766, 00:07:16.052 "min_latency_us": 24.593886462882097, 00:07:16.052 "max_latency_us": 1502.46288209607 00:07:16.052 } 00:07:16.052 ], 00:07:16.052 "core_count": 1 00:07:16.052 } 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73596 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73596 ']' 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73596 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73596 00:07:16.052 killing process with pid 73596 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73596' 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73596 00:07:16.052 [2024-12-07 01:51:21.493654] 
bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.052 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73596 00:07:16.052 [2024-12-07 01:51:21.508465] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d8S1bBhR07 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:16.311 ************************************ 00:07:16.311 END TEST raid_write_error_test 00:07:16.311 ************************************ 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:16.311 00:07:16.311 real 0m3.226s 00:07:16.311 user 0m4.111s 00:07:16.311 sys 0m0.499s 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.311 01:51:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.569 01:51:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:16.569 01:51:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:16.569 01:51:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:16.569 01:51:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.569 01:51:21 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.569 ************************************ 00:07:16.569 START TEST raid_state_function_test 00:07:16.569 ************************************ 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73723 00:07:16.569 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73723' 00:07:16.570 Process raid pid: 73723 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73723 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73723 ']' 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.570 01:51:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.570 [2024-12-07 01:51:21.911949] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:16.570 [2024-12-07 01:51:21.912168] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.829 [2024-12-07 01:51:22.058539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.829 [2024-12-07 01:51:22.103046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.829 [2024-12-07 01:51:22.144822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.829 [2024-12-07 01:51:22.144931] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 [2024-12-07 01:51:22.738715] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.399 [2024-12-07 01:51:22.738856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.399 [2024-12-07 01:51:22.738908] bdev.c:8272:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.399 [2024-12-07 01:51:22.738935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.399 "name": "Existed_Raid", 00:07:17.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.399 "strip_size_kb": 0, 00:07:17.399 "state": "configuring", 00:07:17.399 "raid_level": "raid1", 00:07:17.399 "superblock": false, 00:07:17.399 "num_base_bdevs": 2, 00:07:17.399 "num_base_bdevs_discovered": 0, 00:07:17.399 "num_base_bdevs_operational": 2, 00:07:17.399 "base_bdevs_list": [ 00:07:17.399 { 00:07:17.399 "name": "BaseBdev1", 00:07:17.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.399 "is_configured": false, 00:07:17.399 "data_offset": 0, 00:07:17.399 "data_size": 0 00:07:17.399 }, 00:07:17.399 { 00:07:17.399 "name": "BaseBdev2", 00:07:17.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.399 "is_configured": false, 00:07:17.399 "data_offset": 0, 00:07:17.399 "data_size": 0 00:07:17.399 } 00:07:17.399 ] 00:07:17.399 }' 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.399 01:51:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.659 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:17.659 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.659 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 [2024-12-07 01:51:23.121961] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:17.919 [2024-12-07 01:51:23.122013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.919 01:51:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 [2024-12-07 01:51:23.133945] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:17.919 [2024-12-07 01:51:23.133986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:17.919 [2024-12-07 01:51:23.134003] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:17.919 [2024-12-07 01:51:23.134013] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 [2024-12-07 01:51:23.154623] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.919 BaseBdev1 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:17.919 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@901 -- # local i 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 [ 00:07:17.920 { 00:07:17.920 "name": "BaseBdev1", 00:07:17.920 "aliases": [ 00:07:17.920 "bfb6b0de-daa0-4e94-b53b-2803f9b7ddde" 00:07:17.920 ], 00:07:17.920 "product_name": "Malloc disk", 00:07:17.920 "block_size": 512, 00:07:17.920 "num_blocks": 65536, 00:07:17.920 "uuid": "bfb6b0de-daa0-4e94-b53b-2803f9b7ddde", 00:07:17.920 "assigned_rate_limits": { 00:07:17.920 "rw_ios_per_sec": 0, 00:07:17.920 "rw_mbytes_per_sec": 0, 00:07:17.920 "r_mbytes_per_sec": 0, 00:07:17.920 "w_mbytes_per_sec": 0 00:07:17.920 }, 00:07:17.920 "claimed": true, 00:07:17.920 "claim_type": "exclusive_write", 00:07:17.920 "zoned": false, 00:07:17.920 "supported_io_types": { 00:07:17.920 "read": true, 00:07:17.920 "write": true, 00:07:17.920 "unmap": true, 00:07:17.920 "flush": true, 00:07:17.920 "reset": true, 00:07:17.920 "nvme_admin": false, 00:07:17.920 "nvme_io": false, 00:07:17.920 "nvme_io_md": false, 00:07:17.920 "write_zeroes": true, 
00:07:17.920 "zcopy": true, 00:07:17.920 "get_zone_info": false, 00:07:17.920 "zone_management": false, 00:07:17.920 "zone_append": false, 00:07:17.920 "compare": false, 00:07:17.920 "compare_and_write": false, 00:07:17.920 "abort": true, 00:07:17.920 "seek_hole": false, 00:07:17.920 "seek_data": false, 00:07:17.920 "copy": true, 00:07:17.920 "nvme_iov_md": false 00:07:17.920 }, 00:07:17.920 "memory_domains": [ 00:07:17.920 { 00:07:17.920 "dma_device_id": "system", 00:07:17.920 "dma_device_type": 1 00:07:17.920 }, 00:07:17.920 { 00:07:17.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.920 "dma_device_type": 2 00:07:17.920 } 00:07:17.920 ], 00:07:17.920 "driver_specific": {} 00:07:17.920 } 00:07:17.920 ] 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.920 "name": "Existed_Raid", 00:07:17.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.920 "strip_size_kb": 0, 00:07:17.920 "state": "configuring", 00:07:17.920 "raid_level": "raid1", 00:07:17.920 "superblock": false, 00:07:17.920 "num_base_bdevs": 2, 00:07:17.920 "num_base_bdevs_discovered": 1, 00:07:17.920 "num_base_bdevs_operational": 2, 00:07:17.920 "base_bdevs_list": [ 00:07:17.920 { 00:07:17.920 "name": "BaseBdev1", 00:07:17.920 "uuid": "bfb6b0de-daa0-4e94-b53b-2803f9b7ddde", 00:07:17.920 "is_configured": true, 00:07:17.920 "data_offset": 0, 00:07:17.920 "data_size": 65536 00:07:17.920 }, 00:07:17.920 { 00:07:17.920 "name": "BaseBdev2", 00:07:17.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.920 "is_configured": false, 00:07:17.920 "data_offset": 0, 00:07:17.920 "data_size": 0 00:07:17.920 } 00:07:17.920 ] 00:07:17.920 }' 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.920 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:18.180 01:51:23 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 [2024-12-07 01:51:23.601863] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.180 [2024-12-07 01:51:23.601916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.180 [2024-12-07 01:51:23.613880] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.180 [2024-12-07 01:51:23.615745] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.180 [2024-12-07 01:51:23.615786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.180 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.441 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.441 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.441 "name": "Existed_Raid", 00:07:18.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.441 "strip_size_kb": 0, 00:07:18.441 "state": "configuring", 00:07:18.441 "raid_level": "raid1", 00:07:18.441 "superblock": false, 00:07:18.441 "num_base_bdevs": 2, 00:07:18.441 "num_base_bdevs_discovered": 1, 00:07:18.441 "num_base_bdevs_operational": 2, 00:07:18.441 "base_bdevs_list": [ 00:07:18.441 { 00:07:18.441 "name": "BaseBdev1", 00:07:18.441 "uuid": 
"bfb6b0de-daa0-4e94-b53b-2803f9b7ddde", 00:07:18.441 "is_configured": true, 00:07:18.441 "data_offset": 0, 00:07:18.441 "data_size": 65536 00:07:18.441 }, 00:07:18.441 { 00:07:18.441 "name": "BaseBdev2", 00:07:18.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.441 "is_configured": false, 00:07:18.441 "data_offset": 0, 00:07:18.441 "data_size": 0 00:07:18.441 } 00:07:18.441 ] 00:07:18.441 }' 00:07:18.441 01:51:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.441 01:51:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.702 [2024-12-07 01:51:24.033384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:18.702 [2024-12-07 01:51:24.033729] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:18.702 [2024-12-07 01:51:24.033836] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:18.702 [2024-12-07 01:51:24.034937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:18.702 [2024-12-07 01:51:24.035471] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:18.702 [2024-12-07 01:51:24.035612] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:18.702 [2024-12-07 01:51:24.036277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.702 BaseBdev2 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.702 [ 00:07:18.702 { 00:07:18.702 "name": "BaseBdev2", 00:07:18.702 "aliases": [ 00:07:18.702 "32fe39e9-5d0b-4ced-b9a5-f1ffcbdc609f" 00:07:18.702 ], 00:07:18.702 "product_name": "Malloc disk", 00:07:18.702 "block_size": 512, 00:07:18.702 "num_blocks": 65536, 00:07:18.702 "uuid": "32fe39e9-5d0b-4ced-b9a5-f1ffcbdc609f", 00:07:18.702 "assigned_rate_limits": { 00:07:18.702 "rw_ios_per_sec": 0, 00:07:18.702 "rw_mbytes_per_sec": 0, 00:07:18.702 "r_mbytes_per_sec": 0, 00:07:18.702 "w_mbytes_per_sec": 0 00:07:18.702 }, 
00:07:18.702 "claimed": true, 00:07:18.702 "claim_type": "exclusive_write", 00:07:18.702 "zoned": false, 00:07:18.702 "supported_io_types": { 00:07:18.702 "read": true, 00:07:18.702 "write": true, 00:07:18.702 "unmap": true, 00:07:18.702 "flush": true, 00:07:18.702 "reset": true, 00:07:18.702 "nvme_admin": false, 00:07:18.702 "nvme_io": false, 00:07:18.702 "nvme_io_md": false, 00:07:18.702 "write_zeroes": true, 00:07:18.702 "zcopy": true, 00:07:18.702 "get_zone_info": false, 00:07:18.702 "zone_management": false, 00:07:18.702 "zone_append": false, 00:07:18.702 "compare": false, 00:07:18.702 "compare_and_write": false, 00:07:18.702 "abort": true, 00:07:18.702 "seek_hole": false, 00:07:18.702 "seek_data": false, 00:07:18.702 "copy": true, 00:07:18.702 "nvme_iov_md": false 00:07:18.702 }, 00:07:18.702 "memory_domains": [ 00:07:18.702 { 00:07:18.702 "dma_device_id": "system", 00:07:18.702 "dma_device_type": 1 00:07:18.702 }, 00:07:18.702 { 00:07:18.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.702 "dma_device_type": 2 00:07:18.702 } 00:07:18.702 ], 00:07:18.702 "driver_specific": {} 00:07:18.702 } 00:07:18.702 ] 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:18.702 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.703 "name": "Existed_Raid", 00:07:18.703 "uuid": "35c32541-a4ed-4468-9d13-95f8bd7b84fb", 00:07:18.703 "strip_size_kb": 0, 00:07:18.703 "state": "online", 00:07:18.703 "raid_level": "raid1", 00:07:18.703 "superblock": false, 00:07:18.703 "num_base_bdevs": 2, 00:07:18.703 "num_base_bdevs_discovered": 2, 00:07:18.703 "num_base_bdevs_operational": 2, 00:07:18.703 "base_bdevs_list": [ 00:07:18.703 { 00:07:18.703 "name": "BaseBdev1", 00:07:18.703 "uuid": "bfb6b0de-daa0-4e94-b53b-2803f9b7ddde", 00:07:18.703 "is_configured": true, 00:07:18.703 "data_offset": 0, 00:07:18.703 "data_size": 65536 00:07:18.703 
}, 00:07:18.703 { 00:07:18.703 "name": "BaseBdev2", 00:07:18.703 "uuid": "32fe39e9-5d0b-4ced-b9a5-f1ffcbdc609f", 00:07:18.703 "is_configured": true, 00:07:18.703 "data_offset": 0, 00:07:18.703 "data_size": 65536 00:07:18.703 } 00:07:18.703 ] 00:07:18.703 }' 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.703 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:19.276 [2024-12-07 01:51:24.512789] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:19.276 "name": "Existed_Raid", 00:07:19.276 "aliases": [ 00:07:19.276 
"35c32541-a4ed-4468-9d13-95f8bd7b84fb" 00:07:19.276 ], 00:07:19.276 "product_name": "Raid Volume", 00:07:19.276 "block_size": 512, 00:07:19.276 "num_blocks": 65536, 00:07:19.276 "uuid": "35c32541-a4ed-4468-9d13-95f8bd7b84fb", 00:07:19.276 "assigned_rate_limits": { 00:07:19.276 "rw_ios_per_sec": 0, 00:07:19.276 "rw_mbytes_per_sec": 0, 00:07:19.276 "r_mbytes_per_sec": 0, 00:07:19.276 "w_mbytes_per_sec": 0 00:07:19.276 }, 00:07:19.276 "claimed": false, 00:07:19.276 "zoned": false, 00:07:19.276 "supported_io_types": { 00:07:19.276 "read": true, 00:07:19.276 "write": true, 00:07:19.276 "unmap": false, 00:07:19.276 "flush": false, 00:07:19.276 "reset": true, 00:07:19.276 "nvme_admin": false, 00:07:19.276 "nvme_io": false, 00:07:19.276 "nvme_io_md": false, 00:07:19.276 "write_zeroes": true, 00:07:19.276 "zcopy": false, 00:07:19.276 "get_zone_info": false, 00:07:19.276 "zone_management": false, 00:07:19.276 "zone_append": false, 00:07:19.276 "compare": false, 00:07:19.276 "compare_and_write": false, 00:07:19.276 "abort": false, 00:07:19.276 "seek_hole": false, 00:07:19.276 "seek_data": false, 00:07:19.276 "copy": false, 00:07:19.276 "nvme_iov_md": false 00:07:19.276 }, 00:07:19.276 "memory_domains": [ 00:07:19.276 { 00:07:19.276 "dma_device_id": "system", 00:07:19.276 "dma_device_type": 1 00:07:19.276 }, 00:07:19.276 { 00:07:19.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.276 "dma_device_type": 2 00:07:19.276 }, 00:07:19.276 { 00:07:19.276 "dma_device_id": "system", 00:07:19.276 "dma_device_type": 1 00:07:19.276 }, 00:07:19.276 { 00:07:19.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.276 "dma_device_type": 2 00:07:19.276 } 00:07:19.276 ], 00:07:19.276 "driver_specific": { 00:07:19.276 "raid": { 00:07:19.276 "uuid": "35c32541-a4ed-4468-9d13-95f8bd7b84fb", 00:07:19.276 "strip_size_kb": 0, 00:07:19.276 "state": "online", 00:07:19.276 "raid_level": "raid1", 00:07:19.276 "superblock": false, 00:07:19.276 "num_base_bdevs": 2, 00:07:19.276 
"num_base_bdevs_discovered": 2, 00:07:19.276 "num_base_bdevs_operational": 2, 00:07:19.276 "base_bdevs_list": [ 00:07:19.276 { 00:07:19.276 "name": "BaseBdev1", 00:07:19.276 "uuid": "bfb6b0de-daa0-4e94-b53b-2803f9b7ddde", 00:07:19.276 "is_configured": true, 00:07:19.276 "data_offset": 0, 00:07:19.276 "data_size": 65536 00:07:19.276 }, 00:07:19.276 { 00:07:19.276 "name": "BaseBdev2", 00:07:19.276 "uuid": "32fe39e9-5d0b-4ced-b9a5-f1ffcbdc609f", 00:07:19.276 "is_configured": true, 00:07:19.276 "data_offset": 0, 00:07:19.276 "data_size": 65536 00:07:19.276 } 00:07:19.276 ] 00:07:19.276 } 00:07:19.276 } 00:07:19.276 }' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:19.276 BaseBdev2' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.276 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.276 [2024-12-07 01:51:24.724173] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:19.537 01:51:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.537 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:19.537 "name": "Existed_Raid", 00:07:19.537 "uuid": "35c32541-a4ed-4468-9d13-95f8bd7b84fb", 00:07:19.538 "strip_size_kb": 0, 00:07:19.538 "state": "online", 00:07:19.538 "raid_level": "raid1", 00:07:19.538 "superblock": false, 00:07:19.538 "num_base_bdevs": 2, 00:07:19.538 "num_base_bdevs_discovered": 1, 00:07:19.538 "num_base_bdevs_operational": 1, 00:07:19.538 "base_bdevs_list": [ 00:07:19.538 { 00:07:19.538 "name": null, 00:07:19.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.538 "is_configured": false, 00:07:19.538 "data_offset": 0, 00:07:19.538 "data_size": 65536 00:07:19.538 }, 00:07:19.538 { 00:07:19.538 "name": "BaseBdev2", 00:07:19.538 "uuid": "32fe39e9-5d0b-4ced-b9a5-f1ffcbdc609f", 00:07:19.538 "is_configured": true, 00:07:19.538 "data_offset": 0, 00:07:19.538 "data_size": 65536 00:07:19.538 } 00:07:19.538 ] 00:07:19.538 }' 00:07:19.538 01:51:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.538 01:51:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:19.800 01:51:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 [2024-12-07 01:51:25.134673] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:19.800 [2024-12-07 01:51:25.134766] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:19.800 [2024-12-07 01:51:25.146061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.800 [2024-12-07 01:51:25.146110] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.800 [2024-12-07 01:51:25.146121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.800 
01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73723 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73723 ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73723 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73723 00:07:19.800 killing process with pid 73723 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73723' 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73723 00:07:19.800 [2024-12-07 01:51:25.227111] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.800 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73723 00:07:19.800 [2024-12-07 01:51:25.228117] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.060 01:51:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:20.060 00:07:20.060 real 0m3.643s 00:07:20.060 user 0m5.668s 00:07:20.060 sys 0m0.706s 00:07:20.060 
01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.060 01:51:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.060 ************************************ 00:07:20.060 END TEST raid_state_function_test 00:07:20.060 ************************************ 00:07:20.320 01:51:25 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:20.320 01:51:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:20.320 01:51:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.320 01:51:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.320 ************************************ 00:07:20.320 START TEST raid_state_function_test_sb 00:07:20.320 ************************************ 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73960 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73960' 00:07:20.320 Process raid pid: 73960 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # 
waitforlisten 73960 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 73960 ']' 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.320 01:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.320 [2024-12-07 01:51:25.647865] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:20.320 [2024-12-07 01:51:25.648171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.580 [2024-12-07 01:51:25.800312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.580 [2024-12-07 01:51:25.844360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.580 [2024-12-07 01:51:25.885557] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.580 [2024-12-07 01:51:25.885588] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.150 [2024-12-07 01:51:26.486261] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.150 [2024-12-07 01:51:26.486407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.150 [2024-12-07 01:51:26.486524] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.150 [2024-12-07 01:51:26.486537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.150 
01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.150 "name": "Existed_Raid", 00:07:21.150 "uuid": "3e573136-c8e7-4dfb-95a5-98a2eb1fd7f3", 00:07:21.150 "strip_size_kb": 0, 
00:07:21.150 "state": "configuring", 00:07:21.150 "raid_level": "raid1", 00:07:21.150 "superblock": true, 00:07:21.150 "num_base_bdevs": 2, 00:07:21.150 "num_base_bdevs_discovered": 0, 00:07:21.150 "num_base_bdevs_operational": 2, 00:07:21.150 "base_bdevs_list": [ 00:07:21.150 { 00:07:21.150 "name": "BaseBdev1", 00:07:21.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.150 "is_configured": false, 00:07:21.150 "data_offset": 0, 00:07:21.150 "data_size": 0 00:07:21.150 }, 00:07:21.150 { 00:07:21.150 "name": "BaseBdev2", 00:07:21.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.150 "is_configured": false, 00:07:21.150 "data_offset": 0, 00:07:21.150 "data_size": 0 00:07:21.150 } 00:07:21.150 ] 00:07:21.150 }' 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.150 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.721 [2024-12-07 01:51:26.893473] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.721 [2024-12-07 01:51:26.893596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.721 01:51:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.721 [2024-12-07 01:51:26.905460] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.721 [2024-12-07 01:51:26.905541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.721 [2024-12-07 01:51:26.905578] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.721 [2024-12-07 01:51:26.905601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.721 [2024-12-07 01:51:26.926040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.721 BaseBdev1 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.721 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.722 [ 00:07:21.722 { 00:07:21.722 "name": "BaseBdev1", 00:07:21.722 "aliases": [ 00:07:21.722 "44e27d2a-04a1-49f7-bad6-f99b965dfe08" 00:07:21.722 ], 00:07:21.722 "product_name": "Malloc disk", 00:07:21.722 "block_size": 512, 00:07:21.722 "num_blocks": 65536, 00:07:21.722 "uuid": "44e27d2a-04a1-49f7-bad6-f99b965dfe08", 00:07:21.722 "assigned_rate_limits": { 00:07:21.722 "rw_ios_per_sec": 0, 00:07:21.722 "rw_mbytes_per_sec": 0, 00:07:21.722 "r_mbytes_per_sec": 0, 00:07:21.722 "w_mbytes_per_sec": 0 00:07:21.722 }, 00:07:21.722 "claimed": true, 00:07:21.722 "claim_type": "exclusive_write", 00:07:21.722 "zoned": false, 00:07:21.722 "supported_io_types": { 00:07:21.722 "read": true, 00:07:21.722 "write": true, 00:07:21.722 "unmap": true, 00:07:21.722 "flush": true, 00:07:21.722 "reset": true, 00:07:21.722 "nvme_admin": false, 00:07:21.722 "nvme_io": false, 00:07:21.722 "nvme_io_md": false, 00:07:21.722 "write_zeroes": true, 00:07:21.722 "zcopy": true, 00:07:21.722 "get_zone_info": false, 00:07:21.722 "zone_management": false, 00:07:21.722 "zone_append": false, 00:07:21.722 "compare": false, 00:07:21.722 "compare_and_write": false, 00:07:21.722 
"abort": true, 00:07:21.722 "seek_hole": false, 00:07:21.722 "seek_data": false, 00:07:21.722 "copy": true, 00:07:21.722 "nvme_iov_md": false 00:07:21.722 }, 00:07:21.722 "memory_domains": [ 00:07:21.722 { 00:07:21.722 "dma_device_id": "system", 00:07:21.722 "dma_device_type": 1 00:07:21.722 }, 00:07:21.722 { 00:07:21.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.722 "dma_device_type": 2 00:07:21.722 } 00:07:21.722 ], 00:07:21.722 "driver_specific": {} 00:07:21.722 } 00:07:21.722 ] 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.722 01:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.722 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.722 "name": "Existed_Raid", 00:07:21.722 "uuid": "cc6cd46f-4979-4485-a9cf-a7f3ae80e73d", 00:07:21.722 "strip_size_kb": 0, 00:07:21.722 "state": "configuring", 00:07:21.722 "raid_level": "raid1", 00:07:21.722 "superblock": true, 00:07:21.722 "num_base_bdevs": 2, 00:07:21.722 "num_base_bdevs_discovered": 1, 00:07:21.722 "num_base_bdevs_operational": 2, 00:07:21.722 "base_bdevs_list": [ 00:07:21.722 { 00:07:21.722 "name": "BaseBdev1", 00:07:21.722 "uuid": "44e27d2a-04a1-49f7-bad6-f99b965dfe08", 00:07:21.722 "is_configured": true, 00:07:21.722 "data_offset": 2048, 00:07:21.722 "data_size": 63488 00:07:21.722 }, 00:07:21.722 { 00:07:21.722 "name": "BaseBdev2", 00:07:21.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.722 "is_configured": false, 00:07:21.722 "data_offset": 0, 00:07:21.722 "data_size": 0 00:07:21.722 } 00:07:21.722 ] 00:07:21.722 }' 00:07:21.722 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.722 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.982 [2024-12-07 01:51:27.377310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.982 [2024-12-07 01:51:27.377432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.982 [2024-12-07 01:51:27.389332] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.982 [2024-12-07 01:51:27.391159] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.982 [2024-12-07 01:51:27.391233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.982 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:21.983 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.245 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.245 "name": "Existed_Raid", 00:07:22.245 "uuid": "02a1788c-fbc7-4872-9f20-a124377a1c82", 00:07:22.245 "strip_size_kb": 0, 00:07:22.245 "state": "configuring", 00:07:22.245 "raid_level": "raid1", 00:07:22.245 "superblock": true, 00:07:22.245 "num_base_bdevs": 2, 00:07:22.245 "num_base_bdevs_discovered": 1, 00:07:22.245 "num_base_bdevs_operational": 2, 00:07:22.245 "base_bdevs_list": [ 00:07:22.245 { 00:07:22.245 "name": "BaseBdev1", 00:07:22.245 "uuid": "44e27d2a-04a1-49f7-bad6-f99b965dfe08", 00:07:22.245 "is_configured": true, 00:07:22.245 "data_offset": 2048, 
00:07:22.245 "data_size": 63488 00:07:22.245 }, 00:07:22.245 { 00:07:22.245 "name": "BaseBdev2", 00:07:22.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.245 "is_configured": false, 00:07:22.245 "data_offset": 0, 00:07:22.245 "data_size": 0 00:07:22.245 } 00:07:22.245 ] 00:07:22.245 }' 00:07:22.245 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.245 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.507 [2024-12-07 01:51:27.835967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.507 [2024-12-07 01:51:27.836293] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:22.507 [2024-12-07 01:51:27.836352] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:22.507 [2024-12-07 01:51:27.836650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:22.507 [2024-12-07 01:51:27.836841] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:22.507 [2024-12-07 01:51:27.836896] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:22.507 BaseBdev2 00:07:22.507 [2024-12-07 01:51:27.837052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.507 [ 00:07:22.507 { 00:07:22.507 "name": "BaseBdev2", 00:07:22.507 "aliases": [ 00:07:22.507 "1e393e7d-bde7-4fe7-95dc-80809c201a07" 00:07:22.507 ], 00:07:22.507 "product_name": "Malloc disk", 00:07:22.507 "block_size": 512, 00:07:22.507 "num_blocks": 65536, 00:07:22.507 "uuid": "1e393e7d-bde7-4fe7-95dc-80809c201a07", 00:07:22.507 "assigned_rate_limits": { 00:07:22.507 "rw_ios_per_sec": 0, 00:07:22.507 "rw_mbytes_per_sec": 0, 00:07:22.507 "r_mbytes_per_sec": 0, 00:07:22.507 "w_mbytes_per_sec": 0 00:07:22.507 }, 00:07:22.507 "claimed": true, 00:07:22.507 "claim_type": 
"exclusive_write", 00:07:22.507 "zoned": false, 00:07:22.507 "supported_io_types": { 00:07:22.507 "read": true, 00:07:22.507 "write": true, 00:07:22.507 "unmap": true, 00:07:22.507 "flush": true, 00:07:22.507 "reset": true, 00:07:22.507 "nvme_admin": false, 00:07:22.507 "nvme_io": false, 00:07:22.507 "nvme_io_md": false, 00:07:22.507 "write_zeroes": true, 00:07:22.507 "zcopy": true, 00:07:22.507 "get_zone_info": false, 00:07:22.507 "zone_management": false, 00:07:22.507 "zone_append": false, 00:07:22.507 "compare": false, 00:07:22.507 "compare_and_write": false, 00:07:22.507 "abort": true, 00:07:22.507 "seek_hole": false, 00:07:22.507 "seek_data": false, 00:07:22.507 "copy": true, 00:07:22.507 "nvme_iov_md": false 00:07:22.507 }, 00:07:22.507 "memory_domains": [ 00:07:22.507 { 00:07:22.507 "dma_device_id": "system", 00:07:22.507 "dma_device_type": 1 00:07:22.507 }, 00:07:22.507 { 00:07:22.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.507 "dma_device_type": 2 00:07:22.507 } 00:07:22.507 ], 00:07:22.507 "driver_specific": {} 00:07:22.507 } 00:07:22.507 ] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.507 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.507 "name": "Existed_Raid", 00:07:22.507 "uuid": "02a1788c-fbc7-4872-9f20-a124377a1c82", 00:07:22.507 "strip_size_kb": 0, 00:07:22.507 "state": "online", 00:07:22.507 "raid_level": "raid1", 00:07:22.507 "superblock": true, 00:07:22.507 "num_base_bdevs": 2, 00:07:22.507 "num_base_bdevs_discovered": 2, 00:07:22.507 "num_base_bdevs_operational": 2, 00:07:22.507 "base_bdevs_list": [ 00:07:22.507 { 00:07:22.507 "name": "BaseBdev1", 00:07:22.507 "uuid": "44e27d2a-04a1-49f7-bad6-f99b965dfe08", 00:07:22.507 "is_configured": true, 00:07:22.507 "data_offset": 2048, 00:07:22.507 "data_size": 63488 
00:07:22.507 }, 00:07:22.507 { 00:07:22.508 "name": "BaseBdev2", 00:07:22.508 "uuid": "1e393e7d-bde7-4fe7-95dc-80809c201a07", 00:07:22.508 "is_configured": true, 00:07:22.508 "data_offset": 2048, 00:07:22.508 "data_size": 63488 00:07:22.508 } 00:07:22.508 ] 00:07:22.508 }' 00:07:22.508 01:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.508 01:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.078 [2024-12-07 01:51:28.327434] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.078 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.078 "name": 
"Existed_Raid", 00:07:23.078 "aliases": [ 00:07:23.078 "02a1788c-fbc7-4872-9f20-a124377a1c82" 00:07:23.078 ], 00:07:23.078 "product_name": "Raid Volume", 00:07:23.078 "block_size": 512, 00:07:23.078 "num_blocks": 63488, 00:07:23.078 "uuid": "02a1788c-fbc7-4872-9f20-a124377a1c82", 00:07:23.078 "assigned_rate_limits": { 00:07:23.078 "rw_ios_per_sec": 0, 00:07:23.078 "rw_mbytes_per_sec": 0, 00:07:23.078 "r_mbytes_per_sec": 0, 00:07:23.078 "w_mbytes_per_sec": 0 00:07:23.078 }, 00:07:23.078 "claimed": false, 00:07:23.078 "zoned": false, 00:07:23.078 "supported_io_types": { 00:07:23.078 "read": true, 00:07:23.078 "write": true, 00:07:23.078 "unmap": false, 00:07:23.078 "flush": false, 00:07:23.078 "reset": true, 00:07:23.078 "nvme_admin": false, 00:07:23.078 "nvme_io": false, 00:07:23.078 "nvme_io_md": false, 00:07:23.078 "write_zeroes": true, 00:07:23.078 "zcopy": false, 00:07:23.078 "get_zone_info": false, 00:07:23.078 "zone_management": false, 00:07:23.078 "zone_append": false, 00:07:23.078 "compare": false, 00:07:23.078 "compare_and_write": false, 00:07:23.078 "abort": false, 00:07:23.078 "seek_hole": false, 00:07:23.078 "seek_data": false, 00:07:23.078 "copy": false, 00:07:23.078 "nvme_iov_md": false 00:07:23.078 }, 00:07:23.078 "memory_domains": [ 00:07:23.078 { 00:07:23.078 "dma_device_id": "system", 00:07:23.078 "dma_device_type": 1 00:07:23.078 }, 00:07:23.078 { 00:07:23.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.078 "dma_device_type": 2 00:07:23.078 }, 00:07:23.078 { 00:07:23.078 "dma_device_id": "system", 00:07:23.078 "dma_device_type": 1 00:07:23.078 }, 00:07:23.078 { 00:07:23.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.078 "dma_device_type": 2 00:07:23.078 } 00:07:23.078 ], 00:07:23.078 "driver_specific": { 00:07:23.078 "raid": { 00:07:23.078 "uuid": "02a1788c-fbc7-4872-9f20-a124377a1c82", 00:07:23.078 "strip_size_kb": 0, 00:07:23.078 "state": "online", 00:07:23.078 "raid_level": "raid1", 00:07:23.078 "superblock": true, 00:07:23.078 
"num_base_bdevs": 2, 00:07:23.078 "num_base_bdevs_discovered": 2, 00:07:23.078 "num_base_bdevs_operational": 2, 00:07:23.078 "base_bdevs_list": [ 00:07:23.078 { 00:07:23.078 "name": "BaseBdev1", 00:07:23.078 "uuid": "44e27d2a-04a1-49f7-bad6-f99b965dfe08", 00:07:23.078 "is_configured": true, 00:07:23.078 "data_offset": 2048, 00:07:23.078 "data_size": 63488 00:07:23.078 }, 00:07:23.078 { 00:07:23.078 "name": "BaseBdev2", 00:07:23.078 "uuid": "1e393e7d-bde7-4fe7-95dc-80809c201a07", 00:07:23.078 "is_configured": true, 00:07:23.078 "data_offset": 2048, 00:07:23.078 "data_size": 63488 00:07:23.078 } 00:07:23.079 ] 00:07:23.079 } 00:07:23.079 } 00:07:23.079 }' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.079 BaseBdev2' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.079 [2024-12-07 01:51:28.506949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:23.079 01:51:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.079 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.339 01:51:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.339 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.339 "name": "Existed_Raid", 00:07:23.339 "uuid": "02a1788c-fbc7-4872-9f20-a124377a1c82", 00:07:23.339 "strip_size_kb": 0, 00:07:23.339 "state": "online", 00:07:23.339 "raid_level": "raid1", 00:07:23.339 "superblock": true, 00:07:23.339 "num_base_bdevs": 2, 00:07:23.339 "num_base_bdevs_discovered": 1, 00:07:23.339 "num_base_bdevs_operational": 1, 00:07:23.339 "base_bdevs_list": [ 00:07:23.339 { 00:07:23.339 "name": null, 00:07:23.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.339 "is_configured": false, 00:07:23.339 "data_offset": 0, 00:07:23.339 "data_size": 63488 00:07:23.339 }, 00:07:23.339 { 00:07:23.339 "name": "BaseBdev2", 00:07:23.339 "uuid": "1e393e7d-bde7-4fe7-95dc-80809c201a07", 00:07:23.339 "is_configured": true, 00:07:23.339 "data_offset": 2048, 00:07:23.339 "data_size": 63488 00:07:23.339 } 00:07:23.339 ] 00:07:23.339 }' 00:07:23.339 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.339 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.599 01:51:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.599 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.600 [2024-12-07 01:51:28.981604] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.600 [2024-12-07 01:51:28.981767] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.600 [2024-12-07 01:51:28.993118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.600 [2024-12-07 01:51:28.993223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:23.600 [2024-12-07 01:51:28.993281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.600 01:51:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73960 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 73960 ']' 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 73960 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.600 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73960 00:07:23.860 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.860 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.860 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73960' 00:07:23.860 killing process with pid 73960 00:07:23.860 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 73960 00:07:23.860 [2024-12-07 01:51:29.079380] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.860 01:51:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@974 -- # wait 73960 00:07:23.860 [2024-12-07 01:51:29.080417] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:24.120 01:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:24.120 00:07:24.120 real 0m3.785s 00:07:24.120 user 0m5.919s 00:07:24.120 sys 0m0.762s 00:07:24.120 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.120 ************************************ 00:07:24.120 END TEST raid_state_function_test_sb 00:07:24.120 ************************************ 00:07:24.120 01:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.120 01:51:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:24.120 01:51:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:24.120 01:51:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.120 01:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:24.120 ************************************ 00:07:24.120 START TEST raid_superblock_test 00:07:24.120 ************************************ 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:24.120 01:51:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:24.120 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74201 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74201 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74201 ']' 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.121 01:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.121 [2024-12-07 01:51:29.468709] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:24.121 [2024-12-07 01:51:29.468931] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74201 ] 00:07:24.381 [2024-12-07 01:51:29.612732] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.381 [2024-12-07 01:51:29.656605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.381 [2024-12-07 01:51:29.698022] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.381 [2024-12-07 01:51:29.698144] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.950 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.951 01:51:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 malloc1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 [2024-12-07 01:51:30.323651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:24.951 [2024-12-07 01:51:30.323795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.951 [2024-12-07 01:51:30.323842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:24.951 [2024-12-07 01:51:30.323861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.951 [2024-12-07 01:51:30.325966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.951 [2024-12-07 01:51:30.326009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:24.951 pt1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.951 01:51:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 malloc2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 [2024-12-07 01:51:30.363089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.951 [2024-12-07 01:51:30.363227] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.951 [2024-12-07 01:51:30.363272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.951 
[2024-12-07 01:51:30.363318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.951 [2024-12-07 01:51:30.365972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.951 [2024-12-07 01:51:30.366060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.951 pt2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 [2024-12-07 01:51:30.375086] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:24.951 [2024-12-07 01:51:30.376930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.951 [2024-12-07 01:51:30.377124] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:24.951 [2024-12-07 01:51:30.377173] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.951 [2024-12-07 01:51:30.377440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:24.951 [2024-12-07 01:51:30.377596] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:24.951 [2024-12-07 01:51:30.377634] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:24.951 [2024-12-07 01:51:30.377830] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.951 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.211 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.211 "name": "raid_bdev1", 00:07:25.211 "uuid": 
"dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:25.211 "strip_size_kb": 0, 00:07:25.211 "state": "online", 00:07:25.211 "raid_level": "raid1", 00:07:25.211 "superblock": true, 00:07:25.211 "num_base_bdevs": 2, 00:07:25.211 "num_base_bdevs_discovered": 2, 00:07:25.211 "num_base_bdevs_operational": 2, 00:07:25.211 "base_bdevs_list": [ 00:07:25.211 { 00:07:25.211 "name": "pt1", 00:07:25.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.211 "is_configured": true, 00:07:25.211 "data_offset": 2048, 00:07:25.211 "data_size": 63488 00:07:25.211 }, 00:07:25.211 { 00:07:25.211 "name": "pt2", 00:07:25.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.211 "is_configured": true, 00:07:25.211 "data_offset": 2048, 00:07:25.211 "data_size": 63488 00:07:25.211 } 00:07:25.211 ] 00:07:25.211 }' 00:07:25.211 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.211 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 
01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.471 [2024-12-07 01:51:30.770753] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.471 "name": "raid_bdev1", 00:07:25.471 "aliases": [ 00:07:25.471 "dfe44c84-824c-47ac-95a2-2827047c34cf" 00:07:25.471 ], 00:07:25.471 "product_name": "Raid Volume", 00:07:25.471 "block_size": 512, 00:07:25.471 "num_blocks": 63488, 00:07:25.471 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:25.471 "assigned_rate_limits": { 00:07:25.471 "rw_ios_per_sec": 0, 00:07:25.471 "rw_mbytes_per_sec": 0, 00:07:25.471 "r_mbytes_per_sec": 0, 00:07:25.471 "w_mbytes_per_sec": 0 00:07:25.471 }, 00:07:25.471 "claimed": false, 00:07:25.471 "zoned": false, 00:07:25.471 "supported_io_types": { 00:07:25.471 "read": true, 00:07:25.471 "write": true, 00:07:25.471 "unmap": false, 00:07:25.471 "flush": false, 00:07:25.471 "reset": true, 00:07:25.471 "nvme_admin": false, 00:07:25.471 "nvme_io": false, 00:07:25.471 "nvme_io_md": false, 00:07:25.471 "write_zeroes": true, 00:07:25.471 "zcopy": false, 00:07:25.471 "get_zone_info": false, 00:07:25.471 "zone_management": false, 00:07:25.471 "zone_append": false, 00:07:25.471 "compare": false, 00:07:25.471 "compare_and_write": false, 00:07:25.471 "abort": false, 00:07:25.471 "seek_hole": false, 00:07:25.471 "seek_data": false, 00:07:25.471 "copy": false, 00:07:25.471 "nvme_iov_md": false 00:07:25.471 }, 00:07:25.471 "memory_domains": [ 00:07:25.471 { 00:07:25.471 "dma_device_id": "system", 00:07:25.471 "dma_device_type": 1 00:07:25.471 }, 00:07:25.471 { 00:07:25.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.471 "dma_device_type": 2 00:07:25.471 }, 00:07:25.471 { 00:07:25.471 "dma_device_id": "system", 00:07:25.471 
"dma_device_type": 1 00:07:25.471 }, 00:07:25.471 { 00:07:25.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.471 "dma_device_type": 2 00:07:25.471 } 00:07:25.471 ], 00:07:25.471 "driver_specific": { 00:07:25.471 "raid": { 00:07:25.471 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:25.471 "strip_size_kb": 0, 00:07:25.471 "state": "online", 00:07:25.471 "raid_level": "raid1", 00:07:25.471 "superblock": true, 00:07:25.471 "num_base_bdevs": 2, 00:07:25.471 "num_base_bdevs_discovered": 2, 00:07:25.471 "num_base_bdevs_operational": 2, 00:07:25.471 "base_bdevs_list": [ 00:07:25.471 { 00:07:25.471 "name": "pt1", 00:07:25.471 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.471 "is_configured": true, 00:07:25.471 "data_offset": 2048, 00:07:25.471 "data_size": 63488 00:07:25.471 }, 00:07:25.471 { 00:07:25.471 "name": "pt2", 00:07:25.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.471 "is_configured": true, 00:07:25.471 "data_offset": 2048, 00:07:25.471 "data_size": 63488 00:07:25.471 } 00:07:25.471 ] 00:07:25.471 } 00:07:25.471 } 00:07:25.471 }' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:25.471 pt2' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.471 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:25.732 [2024-12-07 01:51:30.970270] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:07:25.732 01:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dfe44c84-824c-47ac-95a2-2827047c34cf 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dfe44c84-824c-47ac-95a2-2827047c34cf ']' 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 [2024-12-07 01:51:31.017975] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.732 [2024-12-07 01:51:31.018002] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:25.732 [2024-12-07 01:51:31.018073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.732 [2024-12-07 01:51:31.018133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.732 [2024-12-07 01:51:31.018143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 
01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT 
rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.732 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.733 [2024-12-07 01:51:31.141779] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:25.733 [2024-12-07 01:51:31.143656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:25.733 [2024-12-07 01:51:31.143792] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:25.733 [2024-12-07 01:51:31.143884] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:25.733 [2024-12-07 01:51:31.143936] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:25.733 [2024-12-07 01:51:31.143964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:07:25.733 request: 00:07:25.733 { 00:07:25.733 "name": "raid_bdev1", 00:07:25.733 "raid_level": "raid1", 00:07:25.733 "base_bdevs": [ 00:07:25.733 "malloc1", 00:07:25.733 "malloc2" 00:07:25.733 ], 00:07:25.733 "superblock": false, 00:07:25.733 "method": "bdev_raid_create", 00:07:25.733 "req_id": 1 00:07:25.733 } 00:07:25.733 Got JSON-RPC error response 00:07:25.733 response: 00:07:25.733 { 00:07:25.733 "code": -17, 00:07:25.733 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:25.733 } 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:25.733 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:25.993 01:51:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.993 [2024-12-07 01:51:31.205642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:25.993 [2024-12-07 01:51:31.205756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.993 [2024-12-07 01:51:31.205797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:25.993 [2024-12-07 01:51:31.205824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.993 [2024-12-07 01:51:31.207919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.993 [2024-12-07 01:51:31.207986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:25.993 [2024-12-07 01:51:31.208077] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:25.993 [2024-12-07 01:51:31.208125] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:25.993 pt1 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.993 01:51:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.993 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.993 "name": "raid_bdev1", 00:07:25.993 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:25.993 "strip_size_kb": 0, 00:07:25.993 "state": "configuring", 00:07:25.993 "raid_level": "raid1", 00:07:25.993 "superblock": true, 00:07:25.993 "num_base_bdevs": 2, 00:07:25.993 "num_base_bdevs_discovered": 1, 00:07:25.993 "num_base_bdevs_operational": 2, 00:07:25.993 "base_bdevs_list": [ 00:07:25.993 { 00:07:25.993 "name": "pt1", 00:07:25.994 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.994 "is_configured": true, 00:07:25.994 "data_offset": 2048, 00:07:25.994 "data_size": 63488 00:07:25.994 }, 00:07:25.994 { 00:07:25.994 "name": null, 00:07:25.994 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.994 "is_configured": false, 00:07:25.994 "data_offset": 2048, 00:07:25.994 "data_size": 63488 00:07:25.994 } 00:07:25.994 ] 00:07:25.994 }' 00:07:25.994 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:25.994 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.253 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.254 [2024-12-07 01:51:31.632936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:26.254 [2024-12-07 01:51:31.633081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:26.254 [2024-12-07 01:51:31.633107] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:26.254 [2024-12-07 01:51:31.633117] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:26.254 [2024-12-07 01:51:31.633534] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:26.254 [2024-12-07 01:51:31.633552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:26.254 [2024-12-07 01:51:31.633626] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:26.254 [2024-12-07 01:51:31.633647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:26.254 [2024-12-07 01:51:31.633766] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:26.254 [2024-12-07 01:51:31.633777] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, 
blocklen 512 00:07:26.254 [2024-12-07 01:51:31.634021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:26.254 [2024-12-07 01:51:31.634131] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:26.254 [2024-12-07 01:51:31.634144] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:26.254 [2024-12-07 01:51:31.634247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.254 pt2 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.254 01:51:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.254 "name": "raid_bdev1", 00:07:26.254 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:26.254 "strip_size_kb": 0, 00:07:26.254 "state": "online", 00:07:26.254 "raid_level": "raid1", 00:07:26.254 "superblock": true, 00:07:26.254 "num_base_bdevs": 2, 00:07:26.254 "num_base_bdevs_discovered": 2, 00:07:26.254 "num_base_bdevs_operational": 2, 00:07:26.254 "base_bdevs_list": [ 00:07:26.254 { 00:07:26.254 "name": "pt1", 00:07:26.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.254 "is_configured": true, 00:07:26.254 "data_offset": 2048, 00:07:26.254 "data_size": 63488 00:07:26.254 }, 00:07:26.254 { 00:07:26.254 "name": "pt2", 00:07:26.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.254 "is_configured": true, 00:07:26.254 "data_offset": 2048, 00:07:26.254 "data_size": 63488 00:07:26.254 } 00:07:26.254 ] 00:07:26.254 }' 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.254 01:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.823 [2024-12-07 01:51:32.076438] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.823 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.823 "name": "raid_bdev1", 00:07:26.823 "aliases": [ 00:07:26.823 "dfe44c84-824c-47ac-95a2-2827047c34cf" 00:07:26.823 ], 00:07:26.823 "product_name": "Raid Volume", 00:07:26.823 "block_size": 512, 00:07:26.823 "num_blocks": 63488, 00:07:26.823 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:26.823 "assigned_rate_limits": { 00:07:26.823 "rw_ios_per_sec": 0, 00:07:26.823 "rw_mbytes_per_sec": 0, 00:07:26.823 "r_mbytes_per_sec": 0, 00:07:26.823 "w_mbytes_per_sec": 0 00:07:26.823 }, 00:07:26.823 "claimed": false, 00:07:26.823 "zoned": false, 00:07:26.823 "supported_io_types": { 00:07:26.823 "read": true, 00:07:26.823 "write": true, 00:07:26.823 "unmap": false, 00:07:26.823 "flush": false, 00:07:26.823 "reset": true, 00:07:26.823 "nvme_admin": false, 00:07:26.823 "nvme_io": false, 00:07:26.823 "nvme_io_md": false, 00:07:26.823 "write_zeroes": true, 00:07:26.823 "zcopy": 
false, 00:07:26.823 "get_zone_info": false, 00:07:26.823 "zone_management": false, 00:07:26.823 "zone_append": false, 00:07:26.824 "compare": false, 00:07:26.824 "compare_and_write": false, 00:07:26.824 "abort": false, 00:07:26.824 "seek_hole": false, 00:07:26.824 "seek_data": false, 00:07:26.824 "copy": false, 00:07:26.824 "nvme_iov_md": false 00:07:26.824 }, 00:07:26.824 "memory_domains": [ 00:07:26.824 { 00:07:26.824 "dma_device_id": "system", 00:07:26.824 "dma_device_type": 1 00:07:26.824 }, 00:07:26.824 { 00:07:26.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.824 "dma_device_type": 2 00:07:26.824 }, 00:07:26.824 { 00:07:26.824 "dma_device_id": "system", 00:07:26.824 "dma_device_type": 1 00:07:26.824 }, 00:07:26.824 { 00:07:26.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.824 "dma_device_type": 2 00:07:26.824 } 00:07:26.824 ], 00:07:26.824 "driver_specific": { 00:07:26.824 "raid": { 00:07:26.824 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:26.824 "strip_size_kb": 0, 00:07:26.824 "state": "online", 00:07:26.824 "raid_level": "raid1", 00:07:26.824 "superblock": true, 00:07:26.824 "num_base_bdevs": 2, 00:07:26.824 "num_base_bdevs_discovered": 2, 00:07:26.824 "num_base_bdevs_operational": 2, 00:07:26.824 "base_bdevs_list": [ 00:07:26.824 { 00:07:26.824 "name": "pt1", 00:07:26.824 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:26.824 "is_configured": true, 00:07:26.824 "data_offset": 2048, 00:07:26.824 "data_size": 63488 00:07:26.824 }, 00:07:26.824 { 00:07:26.824 "name": "pt2", 00:07:26.824 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:26.824 "is_configured": true, 00:07:26.824 "data_offset": 2048, 00:07:26.824 "data_size": 63488 00:07:26.824 } 00:07:26.824 ] 00:07:26.824 } 00:07:26.824 } 00:07:26.824 }' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:26.824 pt2' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.824 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.824 [2024-12-07 01:51:32.280084] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dfe44c84-824c-47ac-95a2-2827047c34cf '!=' dfe44c84-824c-47ac-95a2-2827047c34cf ']' 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.084 [2024-12-07 01:51:32.327821] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.084 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.085 "name": "raid_bdev1", 00:07:27.085 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:27.085 "strip_size_kb": 0, 00:07:27.085 "state": "online", 00:07:27.085 "raid_level": "raid1", 00:07:27.085 "superblock": true, 00:07:27.085 "num_base_bdevs": 2, 00:07:27.085 "num_base_bdevs_discovered": 1, 00:07:27.085 "num_base_bdevs_operational": 1, 00:07:27.085 "base_bdevs_list": [ 00:07:27.085 { 00:07:27.085 "name": null, 
00:07:27.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.085 "is_configured": false, 00:07:27.085 "data_offset": 0, 00:07:27.085 "data_size": 63488 00:07:27.085 }, 00:07:27.085 { 00:07:27.085 "name": "pt2", 00:07:27.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.085 "is_configured": true, 00:07:27.085 "data_offset": 2048, 00:07:27.085 "data_size": 63488 00:07:27.085 } 00:07:27.085 ] 00:07:27.085 }' 00:07:27.085 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.085 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.345 [2024-12-07 01:51:32.739066] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.345 [2024-12-07 01:51:32.739098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.345 [2024-12-07 01:51:32.739174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.345 [2024-12-07 01:51:32.739220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.345 [2024-12-07 01:51:32.739229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.345 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.605 [2024-12-07 01:51:32.814957] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:27.605 [2024-12-07 01:51:32.815015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.605 [2024-12-07 01:51:32.815037] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:27.605 [2024-12-07 01:51:32.815047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.605 [2024-12-07 01:51:32.817268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.605 [2024-12-07 01:51:32.817303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:27.605 [2024-12-07 01:51:32.817380] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:27.605 [2024-12-07 01:51:32.817411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.605 [2024-12-07 01:51:32.817488] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:27.605 [2024-12-07 01:51:32.817500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:27.605 [2024-12-07 01:51:32.817756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:27.605 [2024-12-07 01:51:32.817924] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:27.605 [2024-12-07 01:51:32.817941] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:27.605 [2024-12-07 01:51:32.818047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.605 pt2 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.605 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.605 "name": "raid_bdev1", 00:07:27.605 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:27.605 "strip_size_kb": 0, 00:07:27.605 "state": "online", 00:07:27.605 "raid_level": "raid1", 00:07:27.605 "superblock": true, 00:07:27.605 "num_base_bdevs": 2, 00:07:27.606 "num_base_bdevs_discovered": 1, 00:07:27.606 "num_base_bdevs_operational": 1, 00:07:27.606 "base_bdevs_list": [ 00:07:27.606 { 00:07:27.606 "name": null, 
00:07:27.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.606 "is_configured": false, 00:07:27.606 "data_offset": 2048, 00:07:27.606 "data_size": 63488 00:07:27.606 }, 00:07:27.606 { 00:07:27.606 "name": "pt2", 00:07:27.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.606 "is_configured": true, 00:07:27.606 "data_offset": 2048, 00:07:27.606 "data_size": 63488 00:07:27.606 } 00:07:27.606 ] 00:07:27.606 }' 00:07:27.606 01:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.606 01:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 [2024-12-07 01:51:33.198359] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.865 [2024-12-07 01:51:33.198451] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.865 [2024-12-07 01:51:33.198537] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.865 [2024-12-07 01:51:33.198599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.865 [2024-12-07 01:51:33.198673] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.865 01:51:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.865 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.865 [2024-12-07 01:51:33.258274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:27.865 [2024-12-07 01:51:33.258372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.865 [2024-12-07 01:51:33.258404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:27.865 [2024-12-07 01:51:33.258435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.865 [2024-12-07 01:51:33.260602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.865 [2024-12-07 01:51:33.260680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:27.865 [2024-12-07 01:51:33.260772] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:27.865 [2024-12-07 01:51:33.260855] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:27.865 [2024-12-07 01:51:33.260992] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: 
raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:27.865 [2024-12-07 01:51:33.261050] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.865 [2024-12-07 01:51:33.261091] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:07:27.865 [2024-12-07 01:51:33.261160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:27.866 [2024-12-07 01:51:33.261257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:27.866 [2024-12-07 01:51:33.261296] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:27.866 [2024-12-07 01:51:33.261527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:27.866 [2024-12-07 01:51:33.261693] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:27.866 [2024-12-07 01:51:33.261736] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:27.866 [2024-12-07 01:51:33.261881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.866 pt1 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.866 "name": "raid_bdev1", 00:07:27.866 "uuid": "dfe44c84-824c-47ac-95a2-2827047c34cf", 00:07:27.866 "strip_size_kb": 0, 00:07:27.866 "state": "online", 00:07:27.866 "raid_level": "raid1", 00:07:27.866 "superblock": true, 00:07:27.866 "num_base_bdevs": 2, 00:07:27.866 "num_base_bdevs_discovered": 1, 00:07:27.866 "num_base_bdevs_operational": 1, 00:07:27.866 "base_bdevs_list": [ 00:07:27.866 { 00:07:27.866 "name": null, 00:07:27.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.866 "is_configured": false, 00:07:27.866 "data_offset": 2048, 00:07:27.866 "data_size": 63488 00:07:27.866 }, 00:07:27.866 { 00:07:27.866 "name": "pt2", 00:07:27.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:27.866 "is_configured": true, 00:07:27.866 
"data_offset": 2048, 00:07:27.866 "data_size": 63488 00:07:27.866 } 00:07:27.866 ] 00:07:27.866 }' 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.866 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 [2024-12-07 01:51:33.705728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' dfe44c84-824c-47ac-95a2-2827047c34cf '!=' dfe44c84-824c-47ac-95a2-2827047c34cf ']' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74201 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74201 ']' 00:07:28.435 
01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74201 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74201 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.435 killing process with pid 74201 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74201' 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74201 00:07:28.435 [2024-12-07 01:51:33.779332] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:28.435 [2024-12-07 01:51:33.779413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.435 [2024-12-07 01:51:33.779461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:28.435 [2024-12-07 01:51:33.779471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:07:28.435 01:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74201 00:07:28.435 [2024-12-07 01:51:33.801622] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:28.695 01:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:28.695 00:07:28.695 real 0m4.651s 00:07:28.695 user 0m7.603s 00:07:28.695 sys 0m0.925s 00:07:28.695 01:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.695 01:51:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.695 ************************************ 00:07:28.695 END TEST raid_superblock_test 00:07:28.695 ************************************ 00:07:28.695 01:51:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:28.695 01:51:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:28.695 01:51:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.695 01:51:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:28.695 ************************************ 00:07:28.695 START TEST raid_read_error_test 00:07:28.695 ************************************ 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:28.695 
01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3SFiQ216nJ 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74509 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74509 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 74509 ']' 00:07:28.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:28.695 01:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.955 [2024-12-07 01:51:34.205297] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:28.955 [2024-12-07 01:51:34.205416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74509 ] 00:07:28.955 [2024-12-07 01:51:34.349187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.955 [2024-12-07 01:51:34.392867] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.215 [2024-12-07 01:51:34.434747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.215 [2024-12-07 01:51:34.434854] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:29.784 01:51:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 BaseBdev1_malloc 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 true 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 [2024-12-07 01:51:35.064881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.784 [2024-12-07 01:51:35.064941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.784 [2024-12-07 01:51:35.064977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:29.784 [2024-12-07 01:51:35.064992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.784 [2024-12-07 01:51:35.067061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.784 [2024-12-07 01:51:35.067181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.784 BaseBdev1 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 
01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 BaseBdev2_malloc 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 true 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 [2024-12-07 01:51:35.115115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:29.784 [2024-12-07 01:51:35.115252] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.784 [2024-12-07 01:51:35.115276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:29.784 [2024-12-07 01:51:35.115286] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.784 [2024-12-07 01:51:35.117312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:29.784 [2024-12-07 01:51:35.117355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:29.784 BaseBdev2 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 [2024-12-07 01:51:35.127168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:29.784 [2024-12-07 01:51:35.128923] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:29.784 [2024-12-07 01:51:35.129097] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:29.784 [2024-12-07 01:51:35.129110] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:29.784 [2024-12-07 01:51:35.129367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:29.784 [2024-12-07 01:51:35.129509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:29.784 [2024-12-07 01:51:35.129521] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:29.784 [2024-12-07 01:51:35.129630] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.784 "name": "raid_bdev1", 00:07:29.784 "uuid": "beeccd05-3a1c-46cb-b159-bf5145806f33", 00:07:29.784 "strip_size_kb": 0, 00:07:29.784 "state": "online", 00:07:29.784 "raid_level": "raid1", 00:07:29.784 "superblock": true, 00:07:29.784 "num_base_bdevs": 2, 00:07:29.784 "num_base_bdevs_discovered": 2, 00:07:29.784 "num_base_bdevs_operational": 2, 00:07:29.784 "base_bdevs_list": [ 00:07:29.784 { 00:07:29.784 "name": "BaseBdev1", 00:07:29.784 "uuid": 
"82df35de-e19c-505f-9e25-92484db7b3bf", 00:07:29.784 "is_configured": true, 00:07:29.784 "data_offset": 2048, 00:07:29.784 "data_size": 63488 00:07:29.784 }, 00:07:29.784 { 00:07:29.784 "name": "BaseBdev2", 00:07:29.784 "uuid": "1eb205c2-ffbf-5c5e-a43e-3ff7275c7672", 00:07:29.784 "is_configured": true, 00:07:29.784 "data_offset": 2048, 00:07:29.784 "data_size": 63488 00:07:29.784 } 00:07:29.784 ] 00:07:29.784 }' 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.784 01:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.353 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:30.353 01:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:30.353 [2024-12-07 01:51:35.634731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 2 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.291 "name": "raid_bdev1", 00:07:31.291 "uuid": "beeccd05-3a1c-46cb-b159-bf5145806f33", 00:07:31.291 "strip_size_kb": 0, 00:07:31.291 "state": "online", 00:07:31.291 "raid_level": "raid1", 00:07:31.291 "superblock": true, 00:07:31.291 "num_base_bdevs": 2, 00:07:31.291 "num_base_bdevs_discovered": 2, 00:07:31.291 "num_base_bdevs_operational": 2, 
00:07:31.291 "base_bdevs_list": [ 00:07:31.291 { 00:07:31.291 "name": "BaseBdev1", 00:07:31.291 "uuid": "82df35de-e19c-505f-9e25-92484db7b3bf", 00:07:31.291 "is_configured": true, 00:07:31.291 "data_offset": 2048, 00:07:31.291 "data_size": 63488 00:07:31.291 }, 00:07:31.291 { 00:07:31.291 "name": "BaseBdev2", 00:07:31.291 "uuid": "1eb205c2-ffbf-5c5e-a43e-3ff7275c7672", 00:07:31.291 "is_configured": true, 00:07:31.291 "data_offset": 2048, 00:07:31.291 "data_size": 63488 00:07:31.291 } 00:07:31.291 ] 00:07:31.291 }' 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.291 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.550 01:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.550 01:51:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:31.550 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.550 [2024-12-07 01:51:37.006201] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.550 [2024-12-07 01:51:37.006327] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.550 [2024-12-07 01:51:37.008967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.550 [2024-12-07 01:51:37.009052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.550 [2024-12-07 01:51:37.009157] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.550 [2024-12-07 01:51:37.009211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:31.809 { 00:07:31.809 "results": [ 00:07:31.809 { 00:07:31.809 "job": "raid_bdev1", 00:07:31.809 "core_mask": "0x1", 00:07:31.809 "workload": "randrw", 00:07:31.809 
"percentage": 50, 00:07:31.809 "status": "finished", 00:07:31.809 "queue_depth": 1, 00:07:31.809 "io_size": 131072, 00:07:31.809 "runtime": 1.372426, 00:07:31.809 "iops": 20048.439770158828, 00:07:31.809 "mibps": 2506.0549712698535, 00:07:31.809 "io_failed": 0, 00:07:31.809 "io_timeout": 0, 00:07:31.809 "avg_latency_us": 47.39786104760642, 00:07:31.809 "min_latency_us": 21.687336244541484, 00:07:31.809 "max_latency_us": 1466.6899563318777 00:07:31.809 } 00:07:31.809 ], 00:07:31.809 "core_count": 1 00:07:31.809 } 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74509 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 74509 ']' 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 74509 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74509 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74509' 00:07:31.809 killing process with pid 74509 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 74509 00:07:31.809 [2024-12-07 01:51:37.040258] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.809 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 74509 00:07:31.809 
[2024-12-07 01:51:37.055818] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3SFiQ216nJ 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:32.069 ************************************ 00:07:32.069 END TEST raid_read_error_test 00:07:32.069 ************************************ 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:32.069 00:07:32.069 real 0m3.182s 00:07:32.069 user 0m4.016s 00:07:32.069 sys 0m0.467s 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:32.069 01:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 01:51:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:32.069 01:51:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:32.069 01:51:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:32.069 01:51:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 ************************************ 00:07:32.069 START TEST raid_write_error_test 00:07:32.069 ************************************ 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:07:32.069 
01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:32.069 01:51:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4zLqZC44g3 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74638 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74638 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 74638 ']' 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.069 01:51:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.069 [2024-12-07 01:51:37.460147] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:32.069 [2024-12-07 01:51:37.460341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74638 ] 00:07:32.328 [2024-12-07 01:51:37.602476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.328 [2024-12-07 01:51:37.649468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.328 [2024-12-07 01:51:37.692133] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.328 [2024-12-07 01:51:37.692218] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 BaseBdev1_malloc 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 true 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 [2024-12-07 01:51:38.321997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:32.941 [2024-12-07 01:51:38.322069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.941 [2024-12-07 01:51:38.322092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:32.941 [2024-12-07 01:51:38.322100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.941 [2024-12-07 01:51:38.324303] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.941 [2024-12-07 01:51:38.324344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:32.941 BaseBdev1 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 BaseBdev2_malloc 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:32.941 01:51:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 true 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 [2024-12-07 01:51:38.380521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:32.941 [2024-12-07 01:51:38.380607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.941 [2024-12-07 01:51:38.380639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:32.941 [2024-12-07 01:51:38.380654] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.941 [2024-12-07 01:51:38.383863] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.941 [2024-12-07 01:51:38.384009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:32.941 BaseBdev2 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.941 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.941 [2024-12-07 01:51:38.392687] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:32.941 [2024-12-07 01:51:38.394705] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.941 [2024-12-07 01:51:38.394970] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:32.941 [2024-12-07 01:51:38.394993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:32.941 [2024-12-07 01:51:38.395251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:32.941 [2024-12-07 01:51:38.395406] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:32.941 [2024-12-07 01:51:38.395420] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:32.941 [2024-12-07 01:51:38.395559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.200 "name": "raid_bdev1", 00:07:33.200 "uuid": "a888e407-0d19-4ea3-9047-30c50e922f99", 00:07:33.200 "strip_size_kb": 0, 00:07:33.200 "state": "online", 00:07:33.200 "raid_level": "raid1", 00:07:33.200 "superblock": true, 00:07:33.200 "num_base_bdevs": 2, 00:07:33.200 "num_base_bdevs_discovered": 2, 00:07:33.200 "num_base_bdevs_operational": 2, 00:07:33.200 "base_bdevs_list": [ 00:07:33.200 { 00:07:33.200 "name": "BaseBdev1", 00:07:33.200 "uuid": "6b5ecd3f-c077-528c-b8d0-60b6ac735110", 00:07:33.200 "is_configured": true, 00:07:33.200 "data_offset": 2048, 00:07:33.200 "data_size": 63488 00:07:33.200 }, 00:07:33.200 { 00:07:33.200 "name": "BaseBdev2", 00:07:33.200 "uuid": "63b467b1-c643-5c28-ad1b-1a8b867caddb", 00:07:33.200 "is_configured": true, 00:07:33.200 "data_offset": 2048, 00:07:33.200 "data_size": 63488 00:07:33.200 } 00:07:33.200 ] 00:07:33.200 }' 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.200 01:51:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.460 01:51:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:33.460 01:51:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:33.460 [2024-12-07 01:51:38.916186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.397 [2024-12-07 01:51:39.832109] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:34.397 [2024-12-07 01:51:39.832270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:34.397 [2024-12-07 01:51:39.832536] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002530 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.397 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:34.398 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.398 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.657 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.657 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.657 "name": "raid_bdev1", 00:07:34.657 "uuid": "a888e407-0d19-4ea3-9047-30c50e922f99", 00:07:34.657 "strip_size_kb": 0, 00:07:34.657 "state": "online", 00:07:34.657 "raid_level": "raid1", 00:07:34.657 "superblock": true, 00:07:34.657 "num_base_bdevs": 2, 00:07:34.657 "num_base_bdevs_discovered": 1, 00:07:34.657 "num_base_bdevs_operational": 1, 00:07:34.657 "base_bdevs_list": [ 00:07:34.657 { 00:07:34.657 "name": null, 00:07:34.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:34.657 "is_configured": false, 00:07:34.657 "data_offset": 0, 00:07:34.657 "data_size": 63488 00:07:34.657 }, 00:07:34.657 { 00:07:34.657 "name": 
"BaseBdev2", 00:07:34.657 "uuid": "63b467b1-c643-5c28-ad1b-1a8b867caddb", 00:07:34.657 "is_configured": true, 00:07:34.657 "data_offset": 2048, 00:07:34.657 "data_size": 63488 00:07:34.657 } 00:07:34.657 ] 00:07:34.657 }' 00:07:34.657 01:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.657 01:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.916 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:34.916 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:34.916 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.917 [2024-12-07 01:51:40.268925] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.917 [2024-12-07 01:51:40.268967] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.917 [2024-12-07 01:51:40.271300] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.917 [2024-12-07 01:51:40.271344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.917 [2024-12-07 01:51:40.271391] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.917 [2024-12-07 01:51:40.271409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:34.917 { 00:07:34.917 "results": [ 00:07:34.917 { 00:07:34.917 "job": "raid_bdev1", 00:07:34.917 "core_mask": "0x1", 00:07:34.917 "workload": "randrw", 00:07:34.917 "percentage": 50, 00:07:34.917 "status": "finished", 00:07:34.917 "queue_depth": 1, 00:07:34.917 "io_size": 131072, 00:07:34.917 "runtime": 1.353408, 00:07:34.917 "iops": 23177.785265049417, 00:07:34.917 "mibps": 2897.223158131177, 00:07:34.917 "io_failed": 0, 00:07:34.917 "io_timeout": 0, 
00:07:34.917 "avg_latency_us": 40.613898097877346, 00:07:34.917 "min_latency_us": 21.128384279475984, 00:07:34.917 "max_latency_us": 1359.3711790393013 00:07:34.917 } 00:07:34.917 ], 00:07:34.917 "core_count": 1 00:07:34.917 } 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74638 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 74638 ']' 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 74638 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74638 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74638' 00:07:34.917 killing process with pid 74638 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 74638 00:07:34.917 [2024-12-07 01:51:40.318330] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.917 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 74638 00:07:34.917 [2024-12-07 01:51:40.333964] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4zLqZC44g3 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:35.177 00:07:35.177 real 0m3.213s 00:07:35.177 user 0m4.092s 00:07:35.177 sys 0m0.482s 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.177 01:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.177 ************************************ 00:07:35.177 END TEST raid_write_error_test 00:07:35.177 ************************************ 00:07:35.177 01:51:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:35.177 01:51:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.177 01:51:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:35.437 01:51:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:35.437 01:51:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.437 01:51:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.437 ************************************ 00:07:35.437 START TEST raid_state_function_test 00:07:35.437 ************************************ 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.437 
01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74771 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74771' 00:07:35.437 Process raid pid: 74771 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74771 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 74771 ']' 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:35.437 01:51:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.437 [2024-12-07 01:51:40.737987] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:35.437 [2024-12-07 01:51:40.738199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.437 [2024-12-07 01:51:40.881043] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.697 [2024-12-07 01:51:40.927745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.697 [2024-12-07 01:51:40.969563] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.697 [2024-12-07 01:51:40.969688] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.268 [2024-12-07 01:51:41.566746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.268 [2024-12-07 01:51:41.566884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.268 [2024-12-07 01:51:41.566918] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.268 [2024-12-07 01:51:41.566958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.268 [2024-12-07 01:51:41.566977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.268 [2024-12-07 01:51:41.567000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.268 "name": "Existed_Raid", 00:07:36.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.268 "strip_size_kb": 64, 00:07:36.268 "state": "configuring", 00:07:36.268 "raid_level": "raid0", 00:07:36.268 "superblock": false, 00:07:36.268 "num_base_bdevs": 3, 00:07:36.268 "num_base_bdevs_discovered": 0, 00:07:36.268 "num_base_bdevs_operational": 3, 00:07:36.268 "base_bdevs_list": [ 00:07:36.268 { 00:07:36.268 "name": "BaseBdev1", 00:07:36.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.268 "is_configured": false, 00:07:36.268 "data_offset": 0, 00:07:36.268 "data_size": 0 00:07:36.268 }, 00:07:36.268 { 00:07:36.268 "name": "BaseBdev2", 00:07:36.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.268 "is_configured": false, 00:07:36.268 "data_offset": 0, 00:07:36.268 "data_size": 0 00:07:36.268 }, 00:07:36.268 { 00:07:36.268 "name": "BaseBdev3", 00:07:36.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.268 "is_configured": false, 00:07:36.268 "data_offset": 0, 00:07:36.268 "data_size": 0 00:07:36.268 } 00:07:36.268 ] 00:07:36.268 }' 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.268 01:51:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.865 01:51:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 [2024-12-07 01:51:42.041694] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.865 [2024-12-07 01:51:42.041793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 [2024-12-07 01:51:42.053688] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.865 [2024-12-07 01:51:42.053728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.865 [2024-12-07 01:51:42.053736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.865 [2024-12-07 01:51:42.053744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.865 [2024-12-07 01:51:42.053750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.865 [2024-12-07 01:51:42.053758] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 [2024-12-07 01:51:42.074211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.865 BaseBdev1 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.865 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.865 [ 00:07:36.865 { 00:07:36.865 "name": "BaseBdev1", 00:07:36.865 "aliases": [ 00:07:36.865 "e67125b1-e326-4ac6-802f-0fa31f6e62e0" 00:07:36.865 ], 00:07:36.865 
"product_name": "Malloc disk", 00:07:36.865 "block_size": 512, 00:07:36.865 "num_blocks": 65536, 00:07:36.865 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:36.865 "assigned_rate_limits": { 00:07:36.865 "rw_ios_per_sec": 0, 00:07:36.865 "rw_mbytes_per_sec": 0, 00:07:36.865 "r_mbytes_per_sec": 0, 00:07:36.865 "w_mbytes_per_sec": 0 00:07:36.865 }, 00:07:36.865 "claimed": true, 00:07:36.865 "claim_type": "exclusive_write", 00:07:36.865 "zoned": false, 00:07:36.865 "supported_io_types": { 00:07:36.865 "read": true, 00:07:36.865 "write": true, 00:07:36.865 "unmap": true, 00:07:36.865 "flush": true, 00:07:36.865 "reset": true, 00:07:36.865 "nvme_admin": false, 00:07:36.865 "nvme_io": false, 00:07:36.865 "nvme_io_md": false, 00:07:36.865 "write_zeroes": true, 00:07:36.865 "zcopy": true, 00:07:36.865 "get_zone_info": false, 00:07:36.865 "zone_management": false, 00:07:36.865 "zone_append": false, 00:07:36.865 "compare": false, 00:07:36.865 "compare_and_write": false, 00:07:36.865 "abort": true, 00:07:36.865 "seek_hole": false, 00:07:36.865 "seek_data": false, 00:07:36.865 "copy": true, 00:07:36.865 "nvme_iov_md": false 00:07:36.865 }, 00:07:36.866 "memory_domains": [ 00:07:36.866 { 00:07:36.866 "dma_device_id": "system", 00:07:36.866 "dma_device_type": 1 00:07:36.866 }, 00:07:36.866 { 00:07:36.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.866 "dma_device_type": 2 00:07:36.866 } 00:07:36.866 ], 00:07:36.866 "driver_specific": {} 00:07:36.866 } 00:07:36.866 ] 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.866 01:51:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.866 "name": "Existed_Raid", 00:07:36.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.866 "strip_size_kb": 64, 00:07:36.866 "state": "configuring", 00:07:36.866 "raid_level": "raid0", 00:07:36.866 "superblock": false, 00:07:36.866 "num_base_bdevs": 3, 00:07:36.866 "num_base_bdevs_discovered": 1, 00:07:36.866 "num_base_bdevs_operational": 3, 00:07:36.866 "base_bdevs_list": [ 00:07:36.866 { 00:07:36.866 "name": "BaseBdev1", 
00:07:36.866 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:36.866 "is_configured": true, 00:07:36.866 "data_offset": 0, 00:07:36.866 "data_size": 65536 00:07:36.866 }, 00:07:36.866 { 00:07:36.866 "name": "BaseBdev2", 00:07:36.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.866 "is_configured": false, 00:07:36.866 "data_offset": 0, 00:07:36.866 "data_size": 0 00:07:36.866 }, 00:07:36.866 { 00:07:36.866 "name": "BaseBdev3", 00:07:36.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.866 "is_configured": false, 00:07:36.866 "data_offset": 0, 00:07:36.866 "data_size": 0 00:07:36.866 } 00:07:36.866 ] 00:07:36.866 }' 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.866 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 [2024-12-07 01:51:42.553460] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.126 [2024-12-07 01:51:42.553509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.126 [2024-12-07 
01:51:42.565492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.126 [2024-12-07 01:51:42.567296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.126 [2024-12-07 01:51:42.567399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.126 [2024-12-07 01:51:42.567414] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.126 [2024-12-07 01:51:42.567424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.126 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.386 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.386 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.386 "name": "Existed_Raid", 00:07:37.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.386 "strip_size_kb": 64, 00:07:37.386 "state": "configuring", 00:07:37.386 "raid_level": "raid0", 00:07:37.386 "superblock": false, 00:07:37.386 "num_base_bdevs": 3, 00:07:37.386 "num_base_bdevs_discovered": 1, 00:07:37.386 "num_base_bdevs_operational": 3, 00:07:37.386 "base_bdevs_list": [ 00:07:37.386 { 00:07:37.386 "name": "BaseBdev1", 00:07:37.386 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:37.386 "is_configured": true, 00:07:37.386 "data_offset": 0, 00:07:37.386 "data_size": 65536 00:07:37.386 }, 00:07:37.386 { 00:07:37.386 "name": "BaseBdev2", 00:07:37.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.386 "is_configured": false, 00:07:37.386 "data_offset": 0, 00:07:37.386 "data_size": 0 00:07:37.386 }, 00:07:37.386 { 00:07:37.386 "name": "BaseBdev3", 00:07:37.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.386 "is_configured": false, 00:07:37.386 "data_offset": 0, 00:07:37.386 "data_size": 0 00:07:37.386 } 00:07:37.386 ] 00:07:37.386 }' 00:07:37.386 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.386 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.647 [2024-12-07 01:51:42.995328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.647 BaseBdev2 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.647 01:51:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.647 01:51:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.647 [ 00:07:37.647 { 00:07:37.647 "name": "BaseBdev2", 00:07:37.647 "aliases": [ 00:07:37.647 "cbfc7896-1b95-4d0e-9de1-f64b2059dd38" 00:07:37.647 ], 00:07:37.647 "product_name": "Malloc disk", 00:07:37.647 "block_size": 512, 00:07:37.647 "num_blocks": 65536, 00:07:37.647 "uuid": "cbfc7896-1b95-4d0e-9de1-f64b2059dd38", 00:07:37.647 "assigned_rate_limits": { 00:07:37.647 "rw_ios_per_sec": 0, 00:07:37.647 "rw_mbytes_per_sec": 0, 00:07:37.647 "r_mbytes_per_sec": 0, 00:07:37.647 "w_mbytes_per_sec": 0 00:07:37.647 }, 00:07:37.647 "claimed": true, 00:07:37.647 "claim_type": "exclusive_write", 00:07:37.647 "zoned": false, 00:07:37.647 "supported_io_types": { 00:07:37.647 "read": true, 00:07:37.647 "write": true, 00:07:37.647 "unmap": true, 00:07:37.647 "flush": true, 00:07:37.647 "reset": true, 00:07:37.647 "nvme_admin": false, 00:07:37.647 "nvme_io": false, 00:07:37.647 "nvme_io_md": false, 00:07:37.647 "write_zeroes": true, 00:07:37.647 "zcopy": true, 00:07:37.647 "get_zone_info": false, 00:07:37.647 "zone_management": false, 00:07:37.647 "zone_append": false, 00:07:37.647 "compare": false, 00:07:37.647 "compare_and_write": false, 00:07:37.647 "abort": true, 00:07:37.647 "seek_hole": false, 00:07:37.647 "seek_data": false, 00:07:37.647 "copy": true, 00:07:37.647 "nvme_iov_md": false 00:07:37.647 }, 00:07:37.647 "memory_domains": [ 00:07:37.647 { 00:07:37.647 "dma_device_id": "system", 00:07:37.647 "dma_device_type": 1 00:07:37.647 }, 00:07:37.647 { 00:07:37.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.647 "dma_device_type": 2 00:07:37.647 } 00:07:37.647 ], 00:07:37.647 "driver_specific": {} 00:07:37.647 } 00:07:37.647 ] 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.647 01:51:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.647 "name": "Existed_Raid", 00:07:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.647 "strip_size_kb": 64, 00:07:37.647 "state": "configuring", 00:07:37.647 "raid_level": "raid0", 00:07:37.647 "superblock": false, 00:07:37.647 "num_base_bdevs": 3, 00:07:37.647 "num_base_bdevs_discovered": 2, 00:07:37.647 "num_base_bdevs_operational": 3, 00:07:37.647 "base_bdevs_list": [ 00:07:37.647 { 00:07:37.647 "name": "BaseBdev1", 00:07:37.647 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:37.647 "is_configured": true, 00:07:37.647 "data_offset": 0, 00:07:37.647 "data_size": 65536 00:07:37.647 }, 00:07:37.647 { 00:07:37.647 "name": "BaseBdev2", 00:07:37.647 "uuid": "cbfc7896-1b95-4d0e-9de1-f64b2059dd38", 00:07:37.647 "is_configured": true, 00:07:37.647 "data_offset": 0, 00:07:37.647 "data_size": 65536 00:07:37.647 }, 00:07:37.647 { 00:07:37.647 "name": "BaseBdev3", 00:07:37.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.647 "is_configured": false, 00:07:37.647 "data_offset": 0, 00:07:37.647 "data_size": 0 00:07:37.647 } 00:07:37.647 ] 00:07:37.647 }' 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.647 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.217 [2024-12-07 01:51:43.497219] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:38.217 [2024-12-07 01:51:43.497260] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:38.217 [2024-12-07 01:51:43.497273] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:38.217 [2024-12-07 01:51:43.497532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:38.217 [2024-12-07 01:51:43.497666] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:38.217 [2024-12-07 01:51:43.497694] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:38.217 [2024-12-07 01:51:43.497878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.217 BaseBdev3 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.217 
01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.217 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.217 [ 00:07:38.217 { 00:07:38.217 "name": "BaseBdev3", 00:07:38.217 "aliases": [ 00:07:38.217 "18305564-fdcd-45ee-8f16-0c685a184bd0" 00:07:38.217 ], 00:07:38.217 "product_name": "Malloc disk", 00:07:38.217 "block_size": 512, 00:07:38.217 "num_blocks": 65536, 00:07:38.217 "uuid": "18305564-fdcd-45ee-8f16-0c685a184bd0", 00:07:38.218 "assigned_rate_limits": { 00:07:38.218 "rw_ios_per_sec": 0, 00:07:38.218 "rw_mbytes_per_sec": 0, 00:07:38.218 "r_mbytes_per_sec": 0, 00:07:38.218 "w_mbytes_per_sec": 0 00:07:38.218 }, 00:07:38.218 "claimed": true, 00:07:38.218 "claim_type": "exclusive_write", 00:07:38.218 "zoned": false, 00:07:38.218 "supported_io_types": { 00:07:38.218 "read": true, 00:07:38.218 "write": true, 00:07:38.218 "unmap": true, 00:07:38.218 "flush": true, 00:07:38.218 "reset": true, 00:07:38.218 "nvme_admin": false, 00:07:38.218 "nvme_io": false, 00:07:38.218 "nvme_io_md": false, 00:07:38.218 "write_zeroes": true, 00:07:38.218 "zcopy": true, 00:07:38.218 "get_zone_info": false, 00:07:38.218 "zone_management": false, 00:07:38.218 "zone_append": false, 00:07:38.218 "compare": false, 00:07:38.218 "compare_and_write": false, 00:07:38.218 "abort": true, 00:07:38.218 "seek_hole": false, 00:07:38.218 "seek_data": false, 00:07:38.218 "copy": true, 00:07:38.218 "nvme_iov_md": false 00:07:38.218 }, 00:07:38.218 "memory_domains": [ 00:07:38.218 { 00:07:38.218 "dma_device_id": "system", 00:07:38.218 "dma_device_type": 1 00:07:38.218 }, 00:07:38.218 { 00:07:38.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.218 "dma_device_type": 2 00:07:38.218 } 00:07:38.218 ], 00:07:38.218 "driver_specific": {} 00:07:38.218 } 00:07:38.218 ] 
00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.218 "name": "Existed_Raid", 00:07:38.218 "uuid": "2e3a26a0-80f6-4d5f-a47f-d734da7f2251", 00:07:38.218 "strip_size_kb": 64, 00:07:38.218 "state": "online", 00:07:38.218 "raid_level": "raid0", 00:07:38.218 "superblock": false, 00:07:38.218 "num_base_bdevs": 3, 00:07:38.218 "num_base_bdevs_discovered": 3, 00:07:38.218 "num_base_bdevs_operational": 3, 00:07:38.218 "base_bdevs_list": [ 00:07:38.218 { 00:07:38.218 "name": "BaseBdev1", 00:07:38.218 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:38.218 "is_configured": true, 00:07:38.218 "data_offset": 0, 00:07:38.218 "data_size": 65536 00:07:38.218 }, 00:07:38.218 { 00:07:38.218 "name": "BaseBdev2", 00:07:38.218 "uuid": "cbfc7896-1b95-4d0e-9de1-f64b2059dd38", 00:07:38.218 "is_configured": true, 00:07:38.218 "data_offset": 0, 00:07:38.218 "data_size": 65536 00:07:38.218 }, 00:07:38.218 { 00:07:38.218 "name": "BaseBdev3", 00:07:38.218 "uuid": "18305564-fdcd-45ee-8f16-0c685a184bd0", 00:07:38.218 "is_configured": true, 00:07:38.218 "data_offset": 0, 00:07:38.218 "data_size": 65536 00:07:38.218 } 00:07:38.218 ] 00:07:38.218 }' 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.218 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.787 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.788 01:51:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.788 [2024-12-07 01:51:43.988745] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.788 "name": "Existed_Raid", 00:07:38.788 "aliases": [ 00:07:38.788 "2e3a26a0-80f6-4d5f-a47f-d734da7f2251" 00:07:38.788 ], 00:07:38.788 "product_name": "Raid Volume", 00:07:38.788 "block_size": 512, 00:07:38.788 "num_blocks": 196608, 00:07:38.788 "uuid": "2e3a26a0-80f6-4d5f-a47f-d734da7f2251", 00:07:38.788 "assigned_rate_limits": { 00:07:38.788 "rw_ios_per_sec": 0, 00:07:38.788 "rw_mbytes_per_sec": 0, 00:07:38.788 "r_mbytes_per_sec": 0, 00:07:38.788 "w_mbytes_per_sec": 0 00:07:38.788 }, 00:07:38.788 "claimed": false, 00:07:38.788 "zoned": false, 00:07:38.788 "supported_io_types": { 00:07:38.788 "read": true, 00:07:38.788 "write": true, 00:07:38.788 "unmap": true, 00:07:38.788 "flush": true, 00:07:38.788 "reset": true, 00:07:38.788 "nvme_admin": false, 00:07:38.788 "nvme_io": false, 00:07:38.788 "nvme_io_md": false, 00:07:38.788 "write_zeroes": true, 00:07:38.788 "zcopy": false, 00:07:38.788 "get_zone_info": false, 00:07:38.788 "zone_management": false, 00:07:38.788 
"zone_append": false, 00:07:38.788 "compare": false, 00:07:38.788 "compare_and_write": false, 00:07:38.788 "abort": false, 00:07:38.788 "seek_hole": false, 00:07:38.788 "seek_data": false, 00:07:38.788 "copy": false, 00:07:38.788 "nvme_iov_md": false 00:07:38.788 }, 00:07:38.788 "memory_domains": [ 00:07:38.788 { 00:07:38.788 "dma_device_id": "system", 00:07:38.788 "dma_device_type": 1 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.788 "dma_device_type": 2 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "dma_device_id": "system", 00:07:38.788 "dma_device_type": 1 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.788 "dma_device_type": 2 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "dma_device_id": "system", 00:07:38.788 "dma_device_type": 1 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.788 "dma_device_type": 2 00:07:38.788 } 00:07:38.788 ], 00:07:38.788 "driver_specific": { 00:07:38.788 "raid": { 00:07:38.788 "uuid": "2e3a26a0-80f6-4d5f-a47f-d734da7f2251", 00:07:38.788 "strip_size_kb": 64, 00:07:38.788 "state": "online", 00:07:38.788 "raid_level": "raid0", 00:07:38.788 "superblock": false, 00:07:38.788 "num_base_bdevs": 3, 00:07:38.788 "num_base_bdevs_discovered": 3, 00:07:38.788 "num_base_bdevs_operational": 3, 00:07:38.788 "base_bdevs_list": [ 00:07:38.788 { 00:07:38.788 "name": "BaseBdev1", 00:07:38.788 "uuid": "e67125b1-e326-4ac6-802f-0fa31f6e62e0", 00:07:38.788 "is_configured": true, 00:07:38.788 "data_offset": 0, 00:07:38.788 "data_size": 65536 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "name": "BaseBdev2", 00:07:38.788 "uuid": "cbfc7896-1b95-4d0e-9de1-f64b2059dd38", 00:07:38.788 "is_configured": true, 00:07:38.788 "data_offset": 0, 00:07:38.788 "data_size": 65536 00:07:38.788 }, 00:07:38.788 { 00:07:38.788 "name": "BaseBdev3", 00:07:38.788 "uuid": "18305564-fdcd-45ee-8f16-0c685a184bd0", 00:07:38.788 "is_configured": true, 
00:07:38.788 "data_offset": 0, 00:07:38.788 "data_size": 65536 00:07:38.788 } 00:07:38.788 ] 00:07:38.788 } 00:07:38.788 } 00:07:38.788 }' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.788 BaseBdev2 00:07:38.788 BaseBdev3' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.788 01:51:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.788 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 [2024-12-07 01:51:44.260037] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:39.049 [2024-12-07 01:51:44.260063] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:39.049 [2024-12-07 01:51:44.260132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.049 "name": "Existed_Raid", 00:07:39.049 "uuid": "2e3a26a0-80f6-4d5f-a47f-d734da7f2251", 00:07:39.049 "strip_size_kb": 64, 00:07:39.049 "state": "offline", 00:07:39.049 "raid_level": "raid0", 00:07:39.049 "superblock": false, 00:07:39.049 "num_base_bdevs": 3, 00:07:39.049 "num_base_bdevs_discovered": 2, 00:07:39.049 "num_base_bdevs_operational": 2, 00:07:39.049 "base_bdevs_list": [ 00:07:39.049 { 00:07:39.049 "name": null, 00:07:39.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.049 "is_configured": false, 00:07:39.049 "data_offset": 0, 00:07:39.049 "data_size": 65536 00:07:39.049 }, 00:07:39.049 { 00:07:39.049 "name": "BaseBdev2", 00:07:39.049 "uuid": "cbfc7896-1b95-4d0e-9de1-f64b2059dd38", 00:07:39.049 "is_configured": true, 00:07:39.049 "data_offset": 0, 00:07:39.049 "data_size": 65536 00:07:39.049 }, 00:07:39.049 { 00:07:39.049 "name": "BaseBdev3", 00:07:39.049 "uuid": "18305564-fdcd-45ee-8f16-0c685a184bd0", 00:07:39.049 "is_configured": true, 00:07:39.049 "data_offset": 0, 00:07:39.049 "data_size": 65536 00:07:39.049 } 00:07:39.049 ] 00:07:39.049 }' 00:07:39.049 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.049 01:51:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.309 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.309 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.310 [2024-12-07 01:51:44.726551] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.310 01:51:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.310 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.570 [2024-12-07 01:51:44.797759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:39.570 [2024-12-07 01:51:44.797853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.570 BaseBdev2 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.570 [ 00:07:39.570 { 00:07:39.570 "name": "BaseBdev2", 00:07:39.570 "aliases": [ 00:07:39.570 "ad0a77fd-7b13-4f2d-aa0b-04344149f131" 00:07:39.570 ], 00:07:39.570 "product_name": "Malloc disk", 00:07:39.570 "block_size": 512, 00:07:39.570 "num_blocks": 65536, 00:07:39.570 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:39.570 "assigned_rate_limits": { 00:07:39.570 "rw_ios_per_sec": 0, 00:07:39.570 "rw_mbytes_per_sec": 0, 00:07:39.570 "r_mbytes_per_sec": 0, 00:07:39.570 "w_mbytes_per_sec": 0 00:07:39.570 }, 00:07:39.570 "claimed": false, 00:07:39.570 "zoned": false, 00:07:39.570 "supported_io_types": { 00:07:39.570 "read": true, 00:07:39.570 "write": true, 00:07:39.570 "unmap": true, 00:07:39.570 "flush": true, 00:07:39.570 "reset": true, 00:07:39.570 "nvme_admin": false, 00:07:39.570 "nvme_io": false, 00:07:39.570 "nvme_io_md": false, 00:07:39.570 "write_zeroes": true, 00:07:39.570 "zcopy": true, 00:07:39.570 "get_zone_info": false, 00:07:39.570 "zone_management": false, 00:07:39.570 "zone_append": false, 00:07:39.570 "compare": false, 00:07:39.570 "compare_and_write": false, 00:07:39.570 "abort": true, 00:07:39.570 "seek_hole": false, 00:07:39.570 "seek_data": false, 00:07:39.570 "copy": true, 00:07:39.570 "nvme_iov_md": false 00:07:39.570 }, 00:07:39.570 "memory_domains": [ 00:07:39.570 { 00:07:39.570 "dma_device_id": "system", 00:07:39.570 "dma_device_type": 1 00:07:39.570 }, 
00:07:39.570 { 00:07:39.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.570 "dma_device_type": 2 00:07:39.570 } 00:07:39.570 ], 00:07:39.570 "driver_specific": {} 00:07:39.570 } 00:07:39.570 ] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.570 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.570 BaseBdev3 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.571 [ 00:07:39.571 { 00:07:39.571 "name": "BaseBdev3", 00:07:39.571 "aliases": [ 00:07:39.571 "3828d38d-c5c0-489d-a4d5-5189a4f26b16" 00:07:39.571 ], 00:07:39.571 "product_name": "Malloc disk", 00:07:39.571 "block_size": 512, 00:07:39.571 "num_blocks": 65536, 00:07:39.571 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:39.571 "assigned_rate_limits": { 00:07:39.571 "rw_ios_per_sec": 0, 00:07:39.571 "rw_mbytes_per_sec": 0, 00:07:39.571 "r_mbytes_per_sec": 0, 00:07:39.571 "w_mbytes_per_sec": 0 00:07:39.571 }, 00:07:39.571 "claimed": false, 00:07:39.571 "zoned": false, 00:07:39.571 "supported_io_types": { 00:07:39.571 "read": true, 00:07:39.571 "write": true, 00:07:39.571 "unmap": true, 00:07:39.571 "flush": true, 00:07:39.571 "reset": true, 00:07:39.571 "nvme_admin": false, 00:07:39.571 "nvme_io": false, 00:07:39.571 "nvme_io_md": false, 00:07:39.571 "write_zeroes": true, 00:07:39.571 "zcopy": true, 00:07:39.571 "get_zone_info": false, 00:07:39.571 "zone_management": false, 00:07:39.571 "zone_append": false, 00:07:39.571 "compare": false, 00:07:39.571 "compare_and_write": false, 00:07:39.571 "abort": true, 00:07:39.571 "seek_hole": false, 00:07:39.571 "seek_data": false, 00:07:39.571 "copy": true, 00:07:39.571 "nvme_iov_md": false 00:07:39.571 }, 00:07:39.571 "memory_domains": [ 00:07:39.571 { 00:07:39.571 "dma_device_id": "system", 00:07:39.571 "dma_device_type": 1 00:07:39.571 }, 00:07:39.571 { 
00:07:39.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.571 "dma_device_type": 2 00:07:39.571 } 00:07:39.571 ], 00:07:39.571 "driver_specific": {} 00:07:39.571 } 00:07:39.571 ] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.571 [2024-12-07 01:51:44.960081] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.571 [2024-12-07 01:51:44.960206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.571 [2024-12-07 01:51:44.960245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.571 [2024-12-07 01:51:44.962046] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.571 01:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.571 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.571 "name": "Existed_Raid", 00:07:39.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.571 "strip_size_kb": 64, 00:07:39.571 "state": "configuring", 00:07:39.571 "raid_level": "raid0", 00:07:39.571 "superblock": false, 00:07:39.571 "num_base_bdevs": 3, 00:07:39.571 "num_base_bdevs_discovered": 2, 00:07:39.571 "num_base_bdevs_operational": 3, 00:07:39.571 "base_bdevs_list": [ 00:07:39.571 { 00:07:39.571 "name": "BaseBdev1", 00:07:39.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.571 
"is_configured": false, 00:07:39.571 "data_offset": 0, 00:07:39.571 "data_size": 0 00:07:39.571 }, 00:07:39.571 { 00:07:39.571 "name": "BaseBdev2", 00:07:39.571 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:39.571 "is_configured": true, 00:07:39.571 "data_offset": 0, 00:07:39.571 "data_size": 65536 00:07:39.571 }, 00:07:39.571 { 00:07:39.571 "name": "BaseBdev3", 00:07:39.571 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:39.571 "is_configured": true, 00:07:39.571 "data_offset": 0, 00:07:39.571 "data_size": 65536 00:07:39.571 } 00:07:39.571 ] 00:07:39.571 }' 00:07:39.571 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.571 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.139 [2024-12-07 01:51:45.339417] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.139 01:51:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.139 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.140 "name": "Existed_Raid", 00:07:40.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.140 "strip_size_kb": 64, 00:07:40.140 "state": "configuring", 00:07:40.140 "raid_level": "raid0", 00:07:40.140 "superblock": false, 00:07:40.140 "num_base_bdevs": 3, 00:07:40.140 "num_base_bdevs_discovered": 1, 00:07:40.140 "num_base_bdevs_operational": 3, 00:07:40.140 "base_bdevs_list": [ 00:07:40.140 { 00:07:40.140 "name": "BaseBdev1", 00:07:40.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.140 "is_configured": false, 00:07:40.140 "data_offset": 0, 00:07:40.140 "data_size": 0 00:07:40.140 }, 00:07:40.140 { 00:07:40.140 "name": null, 00:07:40.140 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:40.140 "is_configured": false, 00:07:40.140 "data_offset": 0, 
00:07:40.140 "data_size": 65536 00:07:40.140 }, 00:07:40.140 { 00:07:40.140 "name": "BaseBdev3", 00:07:40.140 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:40.140 "is_configured": true, 00:07:40.140 "data_offset": 0, 00:07:40.140 "data_size": 65536 00:07:40.140 } 00:07:40.140 ] 00:07:40.140 }' 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.140 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.400 [2024-12-07 01:51:45.849403] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.400 BaseBdev1 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.400 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.660 [ 00:07:40.660 { 00:07:40.660 "name": "BaseBdev1", 00:07:40.660 "aliases": [ 00:07:40.660 "c1f316d5-6e56-47d9-a069-846c7144caeb" 00:07:40.660 ], 00:07:40.660 "product_name": "Malloc disk", 00:07:40.660 "block_size": 512, 00:07:40.660 "num_blocks": 65536, 00:07:40.660 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:40.660 "assigned_rate_limits": { 00:07:40.660 "rw_ios_per_sec": 0, 00:07:40.660 "rw_mbytes_per_sec": 0, 00:07:40.660 "r_mbytes_per_sec": 0, 00:07:40.660 "w_mbytes_per_sec": 0 00:07:40.660 }, 00:07:40.660 "claimed": true, 00:07:40.660 "claim_type": "exclusive_write", 00:07:40.660 "zoned": false, 00:07:40.660 "supported_io_types": { 00:07:40.660 "read": true, 00:07:40.660 "write": true, 00:07:40.660 "unmap": 
true, 00:07:40.660 "flush": true, 00:07:40.660 "reset": true, 00:07:40.660 "nvme_admin": false, 00:07:40.660 "nvme_io": false, 00:07:40.660 "nvme_io_md": false, 00:07:40.660 "write_zeroes": true, 00:07:40.660 "zcopy": true, 00:07:40.660 "get_zone_info": false, 00:07:40.660 "zone_management": false, 00:07:40.660 "zone_append": false, 00:07:40.660 "compare": false, 00:07:40.660 "compare_and_write": false, 00:07:40.660 "abort": true, 00:07:40.660 "seek_hole": false, 00:07:40.660 "seek_data": false, 00:07:40.660 "copy": true, 00:07:40.660 "nvme_iov_md": false 00:07:40.660 }, 00:07:40.660 "memory_domains": [ 00:07:40.660 { 00:07:40.660 "dma_device_id": "system", 00:07:40.660 "dma_device_type": 1 00:07:40.660 }, 00:07:40.660 { 00:07:40.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.660 "dma_device_type": 2 00:07:40.660 } 00:07:40.660 ], 00:07:40.660 "driver_specific": {} 00:07:40.660 } 00:07:40.660 ] 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.660 01:51:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.660 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.660 "name": "Existed_Raid", 00:07:40.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:40.660 "strip_size_kb": 64, 00:07:40.660 "state": "configuring", 00:07:40.660 "raid_level": "raid0", 00:07:40.660 "superblock": false, 00:07:40.660 "num_base_bdevs": 3, 00:07:40.660 "num_base_bdevs_discovered": 2, 00:07:40.660 "num_base_bdevs_operational": 3, 00:07:40.660 "base_bdevs_list": [ 00:07:40.660 { 00:07:40.660 "name": "BaseBdev1", 00:07:40.660 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:40.660 "is_configured": true, 00:07:40.660 "data_offset": 0, 00:07:40.660 "data_size": 65536 00:07:40.660 }, 00:07:40.660 { 00:07:40.660 "name": null, 00:07:40.660 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:40.660 "is_configured": false, 00:07:40.660 "data_offset": 0, 00:07:40.660 "data_size": 65536 00:07:40.660 }, 00:07:40.660 { 00:07:40.660 "name": "BaseBdev3", 00:07:40.660 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:40.660 "is_configured": true, 00:07:40.660 "data_offset": 0, 
00:07:40.660 "data_size": 65536 00:07:40.660 } 00:07:40.660 ] 00:07:40.660 }' 00:07:40.661 01:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.661 01:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.920 [2024-12-07 01:51:46.340578] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.920 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.181 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.181 "name": "Existed_Raid", 00:07:41.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.181 "strip_size_kb": 64, 00:07:41.181 "state": "configuring", 00:07:41.181 "raid_level": "raid0", 00:07:41.181 "superblock": false, 00:07:41.181 "num_base_bdevs": 3, 00:07:41.181 "num_base_bdevs_discovered": 1, 00:07:41.181 "num_base_bdevs_operational": 3, 00:07:41.181 "base_bdevs_list": [ 00:07:41.181 { 00:07:41.181 "name": "BaseBdev1", 00:07:41.181 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:41.181 "is_configured": true, 00:07:41.181 "data_offset": 0, 00:07:41.181 "data_size": 65536 00:07:41.181 }, 00:07:41.181 { 
00:07:41.181 "name": null, 00:07:41.181 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:41.181 "is_configured": false, 00:07:41.181 "data_offset": 0, 00:07:41.181 "data_size": 65536 00:07:41.181 }, 00:07:41.181 { 00:07:41.181 "name": null, 00:07:41.181 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:41.181 "is_configured": false, 00:07:41.181 "data_offset": 0, 00:07:41.181 "data_size": 65536 00:07:41.181 } 00:07:41.181 ] 00:07:41.181 }' 00:07:41.181 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.181 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.441 [2024-12-07 01:51:46.791816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.441 "name": "Existed_Raid", 00:07:41.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.441 "strip_size_kb": 64, 00:07:41.441 "state": "configuring", 00:07:41.441 "raid_level": "raid0", 00:07:41.441 
"superblock": false, 00:07:41.441 "num_base_bdevs": 3, 00:07:41.441 "num_base_bdevs_discovered": 2, 00:07:41.441 "num_base_bdevs_operational": 3, 00:07:41.441 "base_bdevs_list": [ 00:07:41.441 { 00:07:41.441 "name": "BaseBdev1", 00:07:41.441 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:41.441 "is_configured": true, 00:07:41.441 "data_offset": 0, 00:07:41.441 "data_size": 65536 00:07:41.441 }, 00:07:41.441 { 00:07:41.441 "name": null, 00:07:41.441 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:41.441 "is_configured": false, 00:07:41.441 "data_offset": 0, 00:07:41.441 "data_size": 65536 00:07:41.441 }, 00:07:41.441 { 00:07:41.441 "name": "BaseBdev3", 00:07:41.441 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:41.441 "is_configured": true, 00:07:41.441 "data_offset": 0, 00:07:41.441 "data_size": 65536 00:07:41.441 } 00:07:41.441 ] 00:07:41.441 }' 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.441 01:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.012 [2024-12-07 01:51:47.239073] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.012 "name": "Existed_Raid", 00:07:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.012 "strip_size_kb": 64, 00:07:42.012 "state": "configuring", 00:07:42.012 "raid_level": "raid0", 00:07:42.012 "superblock": false, 00:07:42.012 "num_base_bdevs": 3, 00:07:42.012 "num_base_bdevs_discovered": 1, 00:07:42.012 "num_base_bdevs_operational": 3, 00:07:42.012 "base_bdevs_list": [ 00:07:42.012 { 00:07:42.012 "name": null, 00:07:42.012 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:42.012 "is_configured": false, 00:07:42.012 "data_offset": 0, 00:07:42.012 "data_size": 65536 00:07:42.012 }, 00:07:42.012 { 00:07:42.012 "name": null, 00:07:42.012 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:42.012 "is_configured": false, 00:07:42.012 "data_offset": 0, 00:07:42.012 "data_size": 65536 00:07:42.012 }, 00:07:42.012 { 00:07:42.012 "name": "BaseBdev3", 00:07:42.012 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:42.012 "is_configured": true, 00:07:42.012 "data_offset": 0, 00:07:42.012 "data_size": 65536 00:07:42.012 } 00:07:42.012 ] 00:07:42.012 }' 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.012 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.272 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.272 [2024-12-07 01:51:47.728555] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.531 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.532 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.532 "name": "Existed_Raid", 00:07:42.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.532 "strip_size_kb": 64, 00:07:42.532 "state": "configuring", 00:07:42.532 "raid_level": "raid0", 00:07:42.532 "superblock": false, 00:07:42.532 "num_base_bdevs": 3, 00:07:42.532 "num_base_bdevs_discovered": 2, 00:07:42.532 "num_base_bdevs_operational": 3, 00:07:42.532 "base_bdevs_list": [ 00:07:42.532 { 00:07:42.532 "name": null, 00:07:42.532 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:42.532 "is_configured": false, 00:07:42.532 "data_offset": 0, 00:07:42.532 "data_size": 65536 00:07:42.532 }, 00:07:42.532 { 00:07:42.532 "name": "BaseBdev2", 00:07:42.532 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:42.532 "is_configured": true, 00:07:42.532 "data_offset": 0, 00:07:42.532 "data_size": 65536 00:07:42.532 }, 00:07:42.532 { 00:07:42.532 "name": "BaseBdev3", 00:07:42.532 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:42.532 "is_configured": true, 00:07:42.532 "data_offset": 0, 00:07:42.532 "data_size": 65536 00:07:42.532 } 00:07:42.532 ] 00:07:42.532 }' 00:07:42.532 01:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.532 01:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.791 01:51:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.791 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.791 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.792 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c1f316d5-6e56-47d9-a069-846c7144caeb 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 [2024-12-07 01:51:48.290330] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:43.051 [2024-12-07 01:51:48.290434] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:43.051 [2024-12-07 01:51:48.290460] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:43.051 [2024-12-07 01:51:48.290756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:07:43.051 [2024-12-07 01:51:48.290925] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:43.051 [2024-12-07 01:51:48.290966] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:43.051 [2024-12-07 01:51:48.291177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:43.051 NewBaseBdev 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.051 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:43.052 [ 00:07:43.052 { 00:07:43.052 "name": "NewBaseBdev", 00:07:43.052 "aliases": [ 00:07:43.052 "c1f316d5-6e56-47d9-a069-846c7144caeb" 00:07:43.052 ], 00:07:43.052 "product_name": "Malloc disk", 00:07:43.052 "block_size": 512, 00:07:43.052 "num_blocks": 65536, 00:07:43.052 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:43.052 "assigned_rate_limits": { 00:07:43.052 "rw_ios_per_sec": 0, 00:07:43.052 "rw_mbytes_per_sec": 0, 00:07:43.052 "r_mbytes_per_sec": 0, 00:07:43.052 "w_mbytes_per_sec": 0 00:07:43.052 }, 00:07:43.052 "claimed": true, 00:07:43.052 "claim_type": "exclusive_write", 00:07:43.052 "zoned": false, 00:07:43.052 "supported_io_types": { 00:07:43.052 "read": true, 00:07:43.052 "write": true, 00:07:43.052 "unmap": true, 00:07:43.052 "flush": true, 00:07:43.052 "reset": true, 00:07:43.052 "nvme_admin": false, 00:07:43.052 "nvme_io": false, 00:07:43.052 "nvme_io_md": false, 00:07:43.052 "write_zeroes": true, 00:07:43.052 "zcopy": true, 00:07:43.052 "get_zone_info": false, 00:07:43.052 "zone_management": false, 00:07:43.052 "zone_append": false, 00:07:43.052 "compare": false, 00:07:43.052 "compare_and_write": false, 00:07:43.052 "abort": true, 00:07:43.052 "seek_hole": false, 00:07:43.052 "seek_data": false, 00:07:43.052 "copy": true, 00:07:43.052 "nvme_iov_md": false 00:07:43.052 }, 00:07:43.052 "memory_domains": [ 00:07:43.052 { 00:07:43.052 "dma_device_id": "system", 00:07:43.052 "dma_device_type": 1 00:07:43.052 }, 00:07:43.052 { 00:07:43.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.052 "dma_device_type": 2 00:07:43.052 } 00:07:43.052 ], 00:07:43.052 "driver_specific": {} 00:07:43.052 } 00:07:43.052 ] 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.052 "name": "Existed_Raid", 00:07:43.052 "uuid": "020bf41c-7f71-44be-b2d6-9f2057fb531c", 00:07:43.052 "strip_size_kb": 64, 00:07:43.052 "state": "online", 00:07:43.052 "raid_level": "raid0", 00:07:43.052 "superblock": false, 00:07:43.052 "num_base_bdevs": 3, 00:07:43.052 
"num_base_bdevs_discovered": 3, 00:07:43.052 "num_base_bdevs_operational": 3, 00:07:43.052 "base_bdevs_list": [ 00:07:43.052 { 00:07:43.052 "name": "NewBaseBdev", 00:07:43.052 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:43.052 "is_configured": true, 00:07:43.052 "data_offset": 0, 00:07:43.052 "data_size": 65536 00:07:43.052 }, 00:07:43.052 { 00:07:43.052 "name": "BaseBdev2", 00:07:43.052 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:43.052 "is_configured": true, 00:07:43.052 "data_offset": 0, 00:07:43.052 "data_size": 65536 00:07:43.052 }, 00:07:43.052 { 00:07:43.052 "name": "BaseBdev3", 00:07:43.052 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:43.052 "is_configured": true, 00:07:43.052 "data_offset": 0, 00:07:43.052 "data_size": 65536 00:07:43.052 } 00:07:43.052 ] 00:07:43.052 }' 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.052 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:43.312 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.312 [2024-12-07 01:51:48.765888] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.572 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.572 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.572 "name": "Existed_Raid", 00:07:43.572 "aliases": [ 00:07:43.572 "020bf41c-7f71-44be-b2d6-9f2057fb531c" 00:07:43.572 ], 00:07:43.572 "product_name": "Raid Volume", 00:07:43.572 "block_size": 512, 00:07:43.572 "num_blocks": 196608, 00:07:43.572 "uuid": "020bf41c-7f71-44be-b2d6-9f2057fb531c", 00:07:43.572 "assigned_rate_limits": { 00:07:43.572 "rw_ios_per_sec": 0, 00:07:43.572 "rw_mbytes_per_sec": 0, 00:07:43.572 "r_mbytes_per_sec": 0, 00:07:43.572 "w_mbytes_per_sec": 0 00:07:43.572 }, 00:07:43.572 "claimed": false, 00:07:43.572 "zoned": false, 00:07:43.572 "supported_io_types": { 00:07:43.572 "read": true, 00:07:43.572 "write": true, 00:07:43.572 "unmap": true, 00:07:43.572 "flush": true, 00:07:43.572 "reset": true, 00:07:43.572 "nvme_admin": false, 00:07:43.572 "nvme_io": false, 00:07:43.572 "nvme_io_md": false, 00:07:43.572 "write_zeroes": true, 00:07:43.572 "zcopy": false, 00:07:43.572 "get_zone_info": false, 00:07:43.572 "zone_management": false, 00:07:43.572 "zone_append": false, 00:07:43.572 "compare": false, 00:07:43.572 "compare_and_write": false, 00:07:43.572 "abort": false, 00:07:43.572 "seek_hole": false, 00:07:43.572 "seek_data": false, 00:07:43.572 "copy": false, 00:07:43.572 "nvme_iov_md": false 00:07:43.572 }, 00:07:43.572 "memory_domains": [ 00:07:43.572 { 00:07:43.572 "dma_device_id": "system", 00:07:43.572 "dma_device_type": 1 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.572 "dma_device_type": 2 00:07:43.572 }, 00:07:43.572 
{ 00:07:43.572 "dma_device_id": "system", 00:07:43.572 "dma_device_type": 1 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.572 "dma_device_type": 2 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "dma_device_id": "system", 00:07:43.572 "dma_device_type": 1 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.572 "dma_device_type": 2 00:07:43.572 } 00:07:43.572 ], 00:07:43.572 "driver_specific": { 00:07:43.572 "raid": { 00:07:43.572 "uuid": "020bf41c-7f71-44be-b2d6-9f2057fb531c", 00:07:43.572 "strip_size_kb": 64, 00:07:43.572 "state": "online", 00:07:43.572 "raid_level": "raid0", 00:07:43.572 "superblock": false, 00:07:43.572 "num_base_bdevs": 3, 00:07:43.572 "num_base_bdevs_discovered": 3, 00:07:43.572 "num_base_bdevs_operational": 3, 00:07:43.572 "base_bdevs_list": [ 00:07:43.572 { 00:07:43.572 "name": "NewBaseBdev", 00:07:43.572 "uuid": "c1f316d5-6e56-47d9-a069-846c7144caeb", 00:07:43.572 "is_configured": true, 00:07:43.572 "data_offset": 0, 00:07:43.572 "data_size": 65536 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "name": "BaseBdev2", 00:07:43.572 "uuid": "ad0a77fd-7b13-4f2d-aa0b-04344149f131", 00:07:43.572 "is_configured": true, 00:07:43.572 "data_offset": 0, 00:07:43.572 "data_size": 65536 00:07:43.572 }, 00:07:43.572 { 00:07:43.572 "name": "BaseBdev3", 00:07:43.572 "uuid": "3828d38d-c5c0-489d-a4d5-5189a4f26b16", 00:07:43.572 "is_configured": true, 00:07:43.572 "data_offset": 0, 00:07:43.572 "data_size": 65536 00:07:43.572 } 00:07:43.572 ] 00:07:43.572 } 00:07:43.572 } 00:07:43.573 }' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:43.573 BaseBdev2 00:07:43.573 BaseBdev3' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.573 
01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.573 01:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.573 [2024-12-07 01:51:49.021116] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.573 [2024-12-07 01:51:49.021145] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.573 [2024-12-07 01:51:49.021207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.573 [2024-12-07 01:51:49.021255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.573 [2024-12-07 01:51:49.021267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74771 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 74771 ']' 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 74771 00:07:43.573 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74771 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.833 killing process with pid 74771 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74771' 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 74771 00:07:43.833 [2024-12-07 01:51:49.071052] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.833 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 74771 00:07:43.833 [2024-12-07 01:51:49.102084] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.094 00:07:44.094 real 0m8.690s 00:07:44.094 user 0m14.788s 00:07:44.094 sys 0m1.805s 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.094 
01:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.094 ************************************ 00:07:44.094 END TEST raid_state_function_test 00:07:44.094 ************************************ 00:07:44.094 01:51:49 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:44.094 01:51:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:44.094 01:51:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.094 01:51:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.094 ************************************ 00:07:44.094 START TEST raid_state_function_test_sb 00:07:44.094 ************************************ 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75375 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75375' 00:07:44.094 Process raid pid: 75375 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75375 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75375 ']' 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.094 01:51:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.094 [2024-12-07 01:51:49.506375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:44.094 [2024-12-07 01:51:49.506505] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.354 [2024-12-07 01:51:49.651825] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.354 [2024-12-07 01:51:49.697205] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.354 [2024-12-07 01:51:49.737987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.354 [2024-12-07 01:51:49.738027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.923 [2024-12-07 01:51:50.327045] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.923 [2024-12-07 01:51:50.327091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.923 [2024-12-07 01:51:50.327131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.923 [2024-12-07 01:51:50.327142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.923 [2024-12-07 01:51:50.327148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:44.923 [2024-12-07 01:51:50.327159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.923 01:51:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.182 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.182 "name": "Existed_Raid", 00:07:45.183 "uuid": "5287c811-f71f-405c-b2ee-d711725d6f85", 00:07:45.183 "strip_size_kb": 64, 00:07:45.183 "state": "configuring", 00:07:45.183 "raid_level": "raid0", 00:07:45.183 "superblock": true, 00:07:45.183 "num_base_bdevs": 3, 00:07:45.183 "num_base_bdevs_discovered": 0, 00:07:45.183 "num_base_bdevs_operational": 3, 00:07:45.183 "base_bdevs_list": [ 00:07:45.183 { 00:07:45.183 "name": "BaseBdev1", 00:07:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.183 "is_configured": false, 00:07:45.183 "data_offset": 0, 00:07:45.183 "data_size": 0 00:07:45.183 }, 00:07:45.183 { 00:07:45.183 "name": "BaseBdev2", 00:07:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.183 "is_configured": false, 00:07:45.183 "data_offset": 0, 00:07:45.183 "data_size": 0 00:07:45.183 }, 00:07:45.183 { 00:07:45.183 "name": "BaseBdev3", 00:07:45.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.183 "is_configured": false, 00:07:45.183 "data_offset": 0, 00:07:45.183 "data_size": 0 00:07:45.183 } 00:07:45.183 ] 00:07:45.183 }' 00:07:45.183 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.183 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 [2024-12-07 01:51:50.786142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.444 [2024-12-07 01:51:50.786183] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 [2024-12-07 01:51:50.794149] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.444 [2024-12-07 01:51:50.794190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.444 [2024-12-07 01:51:50.794198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.444 [2024-12-07 01:51:50.794206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.444 [2024-12-07 01:51:50.794212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:45.444 [2024-12-07 01:51:50.794220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 [2024-12-07 01:51:50.810905] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.444 BaseBdev1 
00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 [ 00:07:45.444 { 00:07:45.444 "name": "BaseBdev1", 00:07:45.444 "aliases": [ 00:07:45.444 "52596697-eceb-41bb-99ba-18280cee1f1f" 00:07:45.444 ], 00:07:45.444 "product_name": "Malloc disk", 00:07:45.444 "block_size": 512, 00:07:45.444 "num_blocks": 65536, 00:07:45.444 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:45.444 "assigned_rate_limits": { 00:07:45.444 
"rw_ios_per_sec": 0, 00:07:45.444 "rw_mbytes_per_sec": 0, 00:07:45.444 "r_mbytes_per_sec": 0, 00:07:45.444 "w_mbytes_per_sec": 0 00:07:45.444 }, 00:07:45.444 "claimed": true, 00:07:45.444 "claim_type": "exclusive_write", 00:07:45.444 "zoned": false, 00:07:45.444 "supported_io_types": { 00:07:45.444 "read": true, 00:07:45.444 "write": true, 00:07:45.444 "unmap": true, 00:07:45.444 "flush": true, 00:07:45.444 "reset": true, 00:07:45.444 "nvme_admin": false, 00:07:45.444 "nvme_io": false, 00:07:45.444 "nvme_io_md": false, 00:07:45.444 "write_zeroes": true, 00:07:45.444 "zcopy": true, 00:07:45.444 "get_zone_info": false, 00:07:45.444 "zone_management": false, 00:07:45.444 "zone_append": false, 00:07:45.444 "compare": false, 00:07:45.444 "compare_and_write": false, 00:07:45.444 "abort": true, 00:07:45.444 "seek_hole": false, 00:07:45.444 "seek_data": false, 00:07:45.444 "copy": true, 00:07:45.444 "nvme_iov_md": false 00:07:45.444 }, 00:07:45.444 "memory_domains": [ 00:07:45.444 { 00:07:45.444 "dma_device_id": "system", 00:07:45.444 "dma_device_type": 1 00:07:45.444 }, 00:07:45.444 { 00:07:45.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.444 "dma_device_type": 2 00:07:45.444 } 00:07:45.444 ], 00:07:45.444 "driver_specific": {} 00:07:45.444 } 00:07:45.444 ] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.444 "name": "Existed_Raid", 00:07:45.444 "uuid": "6ceffc65-5b18-479d-8e45-4a016ab6270e", 00:07:45.444 "strip_size_kb": 64, 00:07:45.444 "state": "configuring", 00:07:45.444 "raid_level": "raid0", 00:07:45.444 "superblock": true, 00:07:45.444 "num_base_bdevs": 3, 00:07:45.444 "num_base_bdevs_discovered": 1, 00:07:45.444 "num_base_bdevs_operational": 3, 00:07:45.444 "base_bdevs_list": [ 00:07:45.444 { 00:07:45.444 "name": "BaseBdev1", 00:07:45.444 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:45.444 "is_configured": true, 00:07:45.444 "data_offset": 2048, 00:07:45.444 "data_size": 63488 
00:07:45.444 }, 00:07:45.444 { 00:07:45.444 "name": "BaseBdev2", 00:07:45.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.444 "is_configured": false, 00:07:45.444 "data_offset": 0, 00:07:45.444 "data_size": 0 00:07:45.444 }, 00:07:45.444 { 00:07:45.444 "name": "BaseBdev3", 00:07:45.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:45.444 "is_configured": false, 00:07:45.444 "data_offset": 0, 00:07:45.444 "data_size": 0 00:07:45.444 } 00:07:45.444 ] 00:07:45.444 }' 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.444 01:51:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.015 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:46.015 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.015 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.015 [2024-12-07 01:51:51.274148] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.016 [2024-12-07 01:51:51.274212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.016 [2024-12-07 01:51:51.286171] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:46.016 [2024-12-07 
01:51:51.288082] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:46.016 [2024-12-07 01:51:51.288139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:46.016 [2024-12-07 01:51:51.288148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:46.016 [2024-12-07 01:51:51.288158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.016 "name": "Existed_Raid", 00:07:46.016 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:46.016 "strip_size_kb": 64, 00:07:46.016 "state": "configuring", 00:07:46.016 "raid_level": "raid0", 00:07:46.016 "superblock": true, 00:07:46.016 "num_base_bdevs": 3, 00:07:46.016 "num_base_bdevs_discovered": 1, 00:07:46.016 "num_base_bdevs_operational": 3, 00:07:46.016 "base_bdevs_list": [ 00:07:46.016 { 00:07:46.016 "name": "BaseBdev1", 00:07:46.016 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:46.016 "is_configured": true, 00:07:46.016 "data_offset": 2048, 00:07:46.016 "data_size": 63488 00:07:46.016 }, 00:07:46.016 { 00:07:46.016 "name": "BaseBdev2", 00:07:46.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.016 "is_configured": false, 00:07:46.016 "data_offset": 0, 00:07:46.016 "data_size": 0 00:07:46.016 }, 00:07:46.016 { 00:07:46.016 "name": "BaseBdev3", 00:07:46.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.016 "is_configured": false, 00:07:46.016 "data_offset": 0, 00:07:46.016 "data_size": 0 00:07:46.016 } 00:07:46.016 ] 00:07:46.016 }' 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.016 01:51:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 [2024-12-07 01:51:51.767299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:46.589 BaseBdev2 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.589 [ 00:07:46.589 { 00:07:46.589 "name": "BaseBdev2", 00:07:46.589 "aliases": [ 00:07:46.589 "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f" 00:07:46.589 ], 00:07:46.589 "product_name": "Malloc disk", 00:07:46.589 "block_size": 512, 00:07:46.589 "num_blocks": 65536, 00:07:46.589 "uuid": "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f", 00:07:46.589 "assigned_rate_limits": { 00:07:46.589 "rw_ios_per_sec": 0, 00:07:46.589 "rw_mbytes_per_sec": 0, 00:07:46.589 "r_mbytes_per_sec": 0, 00:07:46.589 "w_mbytes_per_sec": 0 00:07:46.589 }, 00:07:46.589 "claimed": true, 00:07:46.589 "claim_type": "exclusive_write", 00:07:46.589 "zoned": false, 00:07:46.589 "supported_io_types": { 00:07:46.589 "read": true, 00:07:46.589 "write": true, 00:07:46.589 "unmap": true, 00:07:46.589 "flush": true, 00:07:46.589 "reset": true, 00:07:46.589 "nvme_admin": false, 00:07:46.589 "nvme_io": false, 00:07:46.589 "nvme_io_md": false, 00:07:46.589 "write_zeroes": true, 00:07:46.589 "zcopy": true, 00:07:46.589 "get_zone_info": false, 00:07:46.589 "zone_management": false, 00:07:46.589 "zone_append": false, 00:07:46.589 "compare": false, 00:07:46.589 "compare_and_write": false, 00:07:46.589 "abort": true, 00:07:46.589 "seek_hole": false, 00:07:46.589 "seek_data": false, 00:07:46.589 "copy": true, 00:07:46.589 "nvme_iov_md": false 00:07:46.589 }, 00:07:46.589 "memory_domains": [ 00:07:46.589 { 00:07:46.589 "dma_device_id": "system", 00:07:46.589 "dma_device_type": 1 00:07:46.589 }, 00:07:46.589 { 00:07:46.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.589 "dma_device_type": 2 00:07:46.589 } 00:07:46.589 ], 00:07:46.589 "driver_specific": {} 00:07:46.589 } 00:07:46.589 ] 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.589 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.590 "name": "Existed_Raid", 00:07:46.590 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:46.590 "strip_size_kb": 64, 00:07:46.590 "state": "configuring", 00:07:46.590 "raid_level": "raid0", 00:07:46.590 "superblock": true, 00:07:46.590 "num_base_bdevs": 3, 00:07:46.590 "num_base_bdevs_discovered": 2, 00:07:46.590 "num_base_bdevs_operational": 3, 00:07:46.590 "base_bdevs_list": [ 00:07:46.590 { 00:07:46.590 "name": "BaseBdev1", 00:07:46.590 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:46.590 "is_configured": true, 00:07:46.590 "data_offset": 2048, 00:07:46.590 "data_size": 63488 00:07:46.590 }, 00:07:46.590 { 00:07:46.590 "name": "BaseBdev2", 00:07:46.590 "uuid": "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f", 00:07:46.590 "is_configured": true, 00:07:46.590 "data_offset": 2048, 00:07:46.590 "data_size": 63488 00:07:46.590 }, 00:07:46.590 { 00:07:46.590 "name": "BaseBdev3", 00:07:46.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.590 "is_configured": false, 00:07:46.590 "data_offset": 0, 00:07:46.590 "data_size": 0 00:07:46.590 } 00:07:46.590 ] 00:07:46.590 }' 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.590 01:51:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.849 [2024-12-07 01:51:52.229344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:46.849 [2024-12-07 01:51:52.229537] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:46.849 [2024-12-07 01:51:52.229555] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:46.849 BaseBdev3 00:07:46.849 [2024-12-07 01:51:52.229889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:46.849 [2024-12-07 01:51:52.230023] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:46.849 [2024-12-07 01:51:52.230042] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:46.849 [2024-12-07 01:51:52.230171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:07:46.849 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.850 [ 00:07:46.850 { 00:07:46.850 "name": "BaseBdev3", 00:07:46.850 "aliases": [ 00:07:46.850 "d8aae699-850c-48f7-8a87-25077513bf90" 00:07:46.850 ], 00:07:46.850 "product_name": "Malloc disk", 00:07:46.850 "block_size": 512, 00:07:46.850 "num_blocks": 65536, 00:07:46.850 "uuid": "d8aae699-850c-48f7-8a87-25077513bf90", 00:07:46.850 "assigned_rate_limits": { 00:07:46.850 "rw_ios_per_sec": 0, 00:07:46.850 "rw_mbytes_per_sec": 0, 00:07:46.850 "r_mbytes_per_sec": 0, 00:07:46.850 "w_mbytes_per_sec": 0 00:07:46.850 }, 00:07:46.850 "claimed": true, 00:07:46.850 "claim_type": "exclusive_write", 00:07:46.850 "zoned": false, 00:07:46.850 "supported_io_types": { 00:07:46.850 "read": true, 00:07:46.850 "write": true, 00:07:46.850 "unmap": true, 00:07:46.850 "flush": true, 00:07:46.850 "reset": true, 00:07:46.850 "nvme_admin": false, 00:07:46.850 "nvme_io": false, 00:07:46.850 "nvme_io_md": false, 00:07:46.850 "write_zeroes": true, 00:07:46.850 "zcopy": true, 00:07:46.850 "get_zone_info": false, 00:07:46.850 "zone_management": false, 00:07:46.850 "zone_append": false, 00:07:46.850 "compare": false, 00:07:46.850 "compare_and_write": false, 00:07:46.850 "abort": true, 00:07:46.850 "seek_hole": false, 00:07:46.850 "seek_data": false, 00:07:46.850 "copy": true, 00:07:46.850 "nvme_iov_md": false 00:07:46.850 }, 00:07:46.850 "memory_domains": [ 00:07:46.850 { 00:07:46.850 "dma_device_id": "system", 00:07:46.850 "dma_device_type": 1 00:07:46.850 }, 00:07:46.850 { 00:07:46.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.850 "dma_device_type": 2 00:07:46.850 } 00:07:46.850 ], 00:07:46.850 "driver_specific": 
{} 00:07:46.850 } 00:07:46.850 ] 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.850 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.110 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.110 "name": "Existed_Raid", 00:07:47.110 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:47.110 "strip_size_kb": 64, 00:07:47.110 "state": "online", 00:07:47.110 "raid_level": "raid0", 00:07:47.110 "superblock": true, 00:07:47.110 "num_base_bdevs": 3, 00:07:47.110 "num_base_bdevs_discovered": 3, 00:07:47.110 "num_base_bdevs_operational": 3, 00:07:47.110 "base_bdevs_list": [ 00:07:47.110 { 00:07:47.110 "name": "BaseBdev1", 00:07:47.110 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:47.110 "is_configured": true, 00:07:47.110 "data_offset": 2048, 00:07:47.110 "data_size": 63488 00:07:47.110 }, 00:07:47.110 { 00:07:47.110 "name": "BaseBdev2", 00:07:47.110 "uuid": "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f", 00:07:47.110 "is_configured": true, 00:07:47.110 "data_offset": 2048, 00:07:47.110 "data_size": 63488 00:07:47.110 }, 00:07:47.110 { 00:07:47.110 "name": "BaseBdev3", 00:07:47.110 "uuid": "d8aae699-850c-48f7-8a87-25077513bf90", 00:07:47.110 "is_configured": true, 00:07:47.110 "data_offset": 2048, 00:07:47.110 "data_size": 63488 00:07:47.110 } 00:07:47.110 ] 00:07:47.110 }' 00:07:47.110 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.110 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.370 [2024-12-07 01:51:52.700865] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.370 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.370 "name": "Existed_Raid", 00:07:47.370 "aliases": [ 00:07:47.370 "59618daf-1473-46eb-93aa-e8afe323c348" 00:07:47.370 ], 00:07:47.370 "product_name": "Raid Volume", 00:07:47.370 "block_size": 512, 00:07:47.370 "num_blocks": 190464, 00:07:47.370 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:47.370 "assigned_rate_limits": { 00:07:47.370 "rw_ios_per_sec": 0, 00:07:47.370 "rw_mbytes_per_sec": 0, 00:07:47.370 "r_mbytes_per_sec": 0, 00:07:47.370 "w_mbytes_per_sec": 0 00:07:47.370 }, 00:07:47.370 "claimed": false, 00:07:47.370 "zoned": false, 00:07:47.370 "supported_io_types": { 00:07:47.370 "read": true, 00:07:47.370 "write": true, 00:07:47.370 "unmap": true, 00:07:47.370 "flush": true, 00:07:47.370 "reset": true, 00:07:47.370 "nvme_admin": false, 00:07:47.370 "nvme_io": false, 00:07:47.370 "nvme_io_md": false, 00:07:47.370 
"write_zeroes": true, 00:07:47.370 "zcopy": false, 00:07:47.370 "get_zone_info": false, 00:07:47.370 "zone_management": false, 00:07:47.370 "zone_append": false, 00:07:47.370 "compare": false, 00:07:47.370 "compare_and_write": false, 00:07:47.370 "abort": false, 00:07:47.370 "seek_hole": false, 00:07:47.370 "seek_data": false, 00:07:47.370 "copy": false, 00:07:47.370 "nvme_iov_md": false 00:07:47.370 }, 00:07:47.370 "memory_domains": [ 00:07:47.370 { 00:07:47.370 "dma_device_id": "system", 00:07:47.370 "dma_device_type": 1 00:07:47.370 }, 00:07:47.370 { 00:07:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.370 "dma_device_type": 2 00:07:47.370 }, 00:07:47.370 { 00:07:47.370 "dma_device_id": "system", 00:07:47.370 "dma_device_type": 1 00:07:47.370 }, 00:07:47.370 { 00:07:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.370 "dma_device_type": 2 00:07:47.370 }, 00:07:47.370 { 00:07:47.370 "dma_device_id": "system", 00:07:47.370 "dma_device_type": 1 00:07:47.370 }, 00:07:47.370 { 00:07:47.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.371 "dma_device_type": 2 00:07:47.371 } 00:07:47.371 ], 00:07:47.371 "driver_specific": { 00:07:47.371 "raid": { 00:07:47.371 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:47.371 "strip_size_kb": 64, 00:07:47.371 "state": "online", 00:07:47.371 "raid_level": "raid0", 00:07:47.371 "superblock": true, 00:07:47.371 "num_base_bdevs": 3, 00:07:47.371 "num_base_bdevs_discovered": 3, 00:07:47.371 "num_base_bdevs_operational": 3, 00:07:47.371 "base_bdevs_list": [ 00:07:47.371 { 00:07:47.371 "name": "BaseBdev1", 00:07:47.371 "uuid": "52596697-eceb-41bb-99ba-18280cee1f1f", 00:07:47.371 "is_configured": true, 00:07:47.371 "data_offset": 2048, 00:07:47.371 "data_size": 63488 00:07:47.371 }, 00:07:47.371 { 00:07:47.371 "name": "BaseBdev2", 00:07:47.371 "uuid": "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f", 00:07:47.371 "is_configured": true, 00:07:47.371 "data_offset": 2048, 00:07:47.371 "data_size": 63488 00:07:47.371 }, 
00:07:47.371 { 00:07:47.371 "name": "BaseBdev3", 00:07:47.371 "uuid": "d8aae699-850c-48f7-8a87-25077513bf90", 00:07:47.371 "is_configured": true, 00:07:47.371 "data_offset": 2048, 00:07:47.371 "data_size": 63488 00:07:47.371 } 00:07:47.371 ] 00:07:47.371 } 00:07:47.371 } 00:07:47.371 }' 00:07:47.371 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.371 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:47.371 BaseBdev2 00:07:47.371 BaseBdev3' 00:07:47.371 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.631 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.632 
01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.632 01:51:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.632 [2024-12-07 01:51:52.992147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:47.632 [2024-12-07 01:51:52.992184] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:47.632 [2024-12-07 01:51:52.992238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.632 "name": "Existed_Raid", 00:07:47.632 "uuid": "59618daf-1473-46eb-93aa-e8afe323c348", 00:07:47.632 "strip_size_kb": 64, 00:07:47.632 "state": "offline", 00:07:47.632 "raid_level": "raid0", 00:07:47.632 "superblock": true, 00:07:47.632 "num_base_bdevs": 3, 00:07:47.632 "num_base_bdevs_discovered": 2, 00:07:47.632 "num_base_bdevs_operational": 2, 00:07:47.632 "base_bdevs_list": [ 00:07:47.632 { 00:07:47.632 "name": null, 00:07:47.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.632 "is_configured": false, 00:07:47.632 "data_offset": 0, 00:07:47.632 "data_size": 63488 00:07:47.632 }, 00:07:47.632 { 00:07:47.632 "name": "BaseBdev2", 00:07:47.632 "uuid": "3bdd05bd-d6e7-4530-bc27-2ed34ba1121f", 00:07:47.632 "is_configured": true, 00:07:47.632 "data_offset": 2048, 00:07:47.632 "data_size": 63488 00:07:47.632 }, 00:07:47.632 { 00:07:47.632 "name": "BaseBdev3", 00:07:47.632 "uuid": "d8aae699-850c-48f7-8a87-25077513bf90", 
00:07:47.632 "is_configured": true, 00:07:47.632 "data_offset": 2048, 00:07:47.632 "data_size": 63488 00:07:47.632 } 00:07:47.632 ] 00:07:47.632 }' 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.632 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 [2024-12-07 01:51:53.478655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 [2024-12-07 01:51:53.545729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:48.202 [2024-12-07 01:51:53.545779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:48.202 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.203 BaseBdev2 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:48.203 01:51:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.203 [ 00:07:48.203 { 00:07:48.203 "name": "BaseBdev2", 00:07:48.203 "aliases": [ 00:07:48.203 "84e255dd-5901-4d5f-a7d6-59f95c11897c" 00:07:48.203 ], 00:07:48.203 "product_name": "Malloc disk", 00:07:48.203 "block_size": 512, 00:07:48.203 "num_blocks": 65536, 00:07:48.203 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:48.203 "assigned_rate_limits": { 00:07:48.203 "rw_ios_per_sec": 0, 00:07:48.203 "rw_mbytes_per_sec": 0, 00:07:48.203 "r_mbytes_per_sec": 0, 00:07:48.203 "w_mbytes_per_sec": 0 00:07:48.203 }, 00:07:48.203 "claimed": false, 00:07:48.203 "zoned": false, 00:07:48.203 "supported_io_types": { 00:07:48.203 "read": true, 00:07:48.203 "write": true, 00:07:48.203 "unmap": true, 00:07:48.203 "flush": true, 00:07:48.203 "reset": true, 00:07:48.203 "nvme_admin": false, 00:07:48.203 "nvme_io": false, 00:07:48.203 "nvme_io_md": false, 00:07:48.203 "write_zeroes": true, 00:07:48.203 "zcopy": true, 00:07:48.203 "get_zone_info": false, 00:07:48.203 
"zone_management": false, 00:07:48.203 "zone_append": false, 00:07:48.203 "compare": false, 00:07:48.203 "compare_and_write": false, 00:07:48.203 "abort": true, 00:07:48.203 "seek_hole": false, 00:07:48.203 "seek_data": false, 00:07:48.203 "copy": true, 00:07:48.203 "nvme_iov_md": false 00:07:48.203 }, 00:07:48.203 "memory_domains": [ 00:07:48.203 { 00:07:48.203 "dma_device_id": "system", 00:07:48.203 "dma_device_type": 1 00:07:48.203 }, 00:07:48.203 { 00:07:48.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.203 "dma_device_type": 2 00:07:48.203 } 00:07:48.203 ], 00:07:48.203 "driver_specific": {} 00:07:48.203 } 00:07:48.203 ] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.203 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 BaseBdev3 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 [ 00:07:48.464 { 00:07:48.464 "name": "BaseBdev3", 00:07:48.464 "aliases": [ 00:07:48.464 "0363ac19-d254-4af8-bd19-729da8978aaa" 00:07:48.464 ], 00:07:48.464 "product_name": "Malloc disk", 00:07:48.464 "block_size": 512, 00:07:48.464 "num_blocks": 65536, 00:07:48.464 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:48.464 "assigned_rate_limits": { 00:07:48.464 "rw_ios_per_sec": 0, 00:07:48.464 "rw_mbytes_per_sec": 0, 00:07:48.464 "r_mbytes_per_sec": 0, 00:07:48.464 "w_mbytes_per_sec": 0 00:07:48.464 }, 00:07:48.464 "claimed": false, 00:07:48.464 "zoned": false, 00:07:48.464 "supported_io_types": { 00:07:48.464 "read": true, 00:07:48.464 "write": true, 00:07:48.464 "unmap": true, 00:07:48.464 "flush": true, 00:07:48.464 "reset": true, 00:07:48.464 "nvme_admin": false, 00:07:48.464 "nvme_io": false, 00:07:48.464 "nvme_io_md": false, 00:07:48.464 "write_zeroes": true, 00:07:48.464 
"zcopy": true, 00:07:48.464 "get_zone_info": false, 00:07:48.464 "zone_management": false, 00:07:48.464 "zone_append": false, 00:07:48.464 "compare": false, 00:07:48.464 "compare_and_write": false, 00:07:48.464 "abort": true, 00:07:48.464 "seek_hole": false, 00:07:48.464 "seek_data": false, 00:07:48.464 "copy": true, 00:07:48.464 "nvme_iov_md": false 00:07:48.464 }, 00:07:48.464 "memory_domains": [ 00:07:48.464 { 00:07:48.464 "dma_device_id": "system", 00:07:48.464 "dma_device_type": 1 00:07:48.464 }, 00:07:48.464 { 00:07:48.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.464 "dma_device_type": 2 00:07:48.464 } 00:07:48.464 ], 00:07:48.464 "driver_specific": {} 00:07:48.464 } 00:07:48.464 ] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 [2024-12-07 01:51:53.718297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.464 [2024-12-07 01:51:53.718340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.464 [2024-12-07 01:51:53.718359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.464 [2024-12-07 01:51:53.720196] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.464 01:51:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.464 "name": "Existed_Raid", 00:07:48.464 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:48.464 "strip_size_kb": 64, 00:07:48.464 "state": "configuring", 00:07:48.464 "raid_level": "raid0", 00:07:48.464 "superblock": true, 00:07:48.464 "num_base_bdevs": 3, 00:07:48.464 "num_base_bdevs_discovered": 2, 00:07:48.464 "num_base_bdevs_operational": 3, 00:07:48.464 "base_bdevs_list": [ 00:07:48.464 { 00:07:48.464 "name": "BaseBdev1", 00:07:48.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.464 "is_configured": false, 00:07:48.464 "data_offset": 0, 00:07:48.464 "data_size": 0 00:07:48.464 }, 00:07:48.464 { 00:07:48.464 "name": "BaseBdev2", 00:07:48.464 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:48.464 "is_configured": true, 00:07:48.464 "data_offset": 2048, 00:07:48.464 "data_size": 63488 00:07:48.464 }, 00:07:48.464 { 00:07:48.464 "name": "BaseBdev3", 00:07:48.464 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:48.464 "is_configured": true, 00:07:48.464 "data_offset": 2048, 00:07:48.464 "data_size": 63488 00:07:48.464 } 00:07:48.464 ] 00:07:48.464 }' 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.464 01:51:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.724 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:48.724 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.724 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.983 [2024-12-07 01:51:54.185484] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.983 01:51:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.983 "name": "Existed_Raid", 00:07:48.983 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:48.983 "strip_size_kb": 64, 
00:07:48.983 "state": "configuring", 00:07:48.983 "raid_level": "raid0", 00:07:48.983 "superblock": true, 00:07:48.983 "num_base_bdevs": 3, 00:07:48.983 "num_base_bdevs_discovered": 1, 00:07:48.983 "num_base_bdevs_operational": 3, 00:07:48.983 "base_bdevs_list": [ 00:07:48.983 { 00:07:48.983 "name": "BaseBdev1", 00:07:48.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.983 "is_configured": false, 00:07:48.983 "data_offset": 0, 00:07:48.983 "data_size": 0 00:07:48.983 }, 00:07:48.983 { 00:07:48.983 "name": null, 00:07:48.983 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:48.983 "is_configured": false, 00:07:48.983 "data_offset": 0, 00:07:48.983 "data_size": 63488 00:07:48.983 }, 00:07:48.983 { 00:07:48.983 "name": "BaseBdev3", 00:07:48.983 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:48.983 "is_configured": true, 00:07:48.983 "data_offset": 2048, 00:07:48.983 "data_size": 63488 00:07:48.983 } 00:07:48.983 ] 00:07:48.983 }' 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.983 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.243 [2024-12-07 01:51:54.671738] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.243 BaseBdev1 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.243 
[ 00:07:49.243 { 00:07:49.243 "name": "BaseBdev1", 00:07:49.243 "aliases": [ 00:07:49.243 "0dc2b1ec-2e50-4477-ae78-93840dd5c2db" 00:07:49.243 ], 00:07:49.243 "product_name": "Malloc disk", 00:07:49.243 "block_size": 512, 00:07:49.243 "num_blocks": 65536, 00:07:49.243 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:49.243 "assigned_rate_limits": { 00:07:49.243 "rw_ios_per_sec": 0, 00:07:49.243 "rw_mbytes_per_sec": 0, 00:07:49.243 "r_mbytes_per_sec": 0, 00:07:49.243 "w_mbytes_per_sec": 0 00:07:49.243 }, 00:07:49.243 "claimed": true, 00:07:49.243 "claim_type": "exclusive_write", 00:07:49.243 "zoned": false, 00:07:49.243 "supported_io_types": { 00:07:49.243 "read": true, 00:07:49.243 "write": true, 00:07:49.243 "unmap": true, 00:07:49.243 "flush": true, 00:07:49.243 "reset": true, 00:07:49.243 "nvme_admin": false, 00:07:49.243 "nvme_io": false, 00:07:49.243 "nvme_io_md": false, 00:07:49.243 "write_zeroes": true, 00:07:49.243 "zcopy": true, 00:07:49.243 "get_zone_info": false, 00:07:49.243 "zone_management": false, 00:07:49.243 "zone_append": false, 00:07:49.243 "compare": false, 00:07:49.243 "compare_and_write": false, 00:07:49.243 "abort": true, 00:07:49.243 "seek_hole": false, 00:07:49.243 "seek_data": false, 00:07:49.243 "copy": true, 00:07:49.243 "nvme_iov_md": false 00:07:49.243 }, 00:07:49.243 "memory_domains": [ 00:07:49.243 { 00:07:49.243 "dma_device_id": "system", 00:07:49.243 "dma_device_type": 1 00:07:49.243 }, 00:07:49.243 { 00:07:49.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.243 "dma_device_type": 2 00:07:49.243 } 00:07:49.243 ], 00:07:49.243 "driver_specific": {} 00:07:49.243 } 00:07:49.243 ] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.243 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.503 "name": "Existed_Raid", 00:07:49.503 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:49.503 "strip_size_kb": 64, 00:07:49.503 "state": "configuring", 00:07:49.503 "raid_level": "raid0", 00:07:49.503 "superblock": true, 
00:07:49.503 "num_base_bdevs": 3, 00:07:49.503 "num_base_bdevs_discovered": 2, 00:07:49.503 "num_base_bdevs_operational": 3, 00:07:49.503 "base_bdevs_list": [ 00:07:49.503 { 00:07:49.503 "name": "BaseBdev1", 00:07:49.503 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:49.503 "is_configured": true, 00:07:49.503 "data_offset": 2048, 00:07:49.503 "data_size": 63488 00:07:49.503 }, 00:07:49.503 { 00:07:49.503 "name": null, 00:07:49.503 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:49.503 "is_configured": false, 00:07:49.503 "data_offset": 0, 00:07:49.503 "data_size": 63488 00:07:49.503 }, 00:07:49.503 { 00:07:49.503 "name": "BaseBdev3", 00:07:49.503 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:49.503 "is_configured": true, 00:07:49.503 "data_offset": 2048, 00:07:49.503 "data_size": 63488 00:07:49.503 } 00:07:49.503 ] 00:07:49.503 }' 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.503 01:51:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.762 [2024-12-07 01:51:55.210834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.762 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.020 "name": "Existed_Raid", 00:07:50.020 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:50.020 "strip_size_kb": 64, 00:07:50.020 "state": "configuring", 00:07:50.020 "raid_level": "raid0", 00:07:50.020 "superblock": true, 00:07:50.020 "num_base_bdevs": 3, 00:07:50.020 "num_base_bdevs_discovered": 1, 00:07:50.020 "num_base_bdevs_operational": 3, 00:07:50.020 "base_bdevs_list": [ 00:07:50.020 { 00:07:50.020 "name": "BaseBdev1", 00:07:50.020 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:50.020 "is_configured": true, 00:07:50.020 "data_offset": 2048, 00:07:50.020 "data_size": 63488 00:07:50.020 }, 00:07:50.020 { 00:07:50.020 "name": null, 00:07:50.020 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:50.020 "is_configured": false, 00:07:50.020 "data_offset": 0, 00:07:50.020 "data_size": 63488 00:07:50.020 }, 00:07:50.020 { 00:07:50.020 "name": null, 00:07:50.020 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:50.020 "is_configured": false, 00:07:50.020 "data_offset": 0, 00:07:50.020 "data_size": 63488 00:07:50.020 } 00:07:50.020 ] 00:07:50.020 }' 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.020 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.279 [2024-12-07 01:51:55.658076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.279 "name": "Existed_Raid", 00:07:50.279 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:50.279 "strip_size_kb": 64, 00:07:50.279 "state": "configuring", 00:07:50.279 "raid_level": "raid0", 00:07:50.279 "superblock": true, 00:07:50.279 "num_base_bdevs": 3, 00:07:50.279 "num_base_bdevs_discovered": 2, 00:07:50.279 "num_base_bdevs_operational": 3, 00:07:50.279 "base_bdevs_list": [ 00:07:50.279 { 00:07:50.279 "name": "BaseBdev1", 00:07:50.279 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:50.279 "is_configured": true, 00:07:50.279 "data_offset": 2048, 00:07:50.279 "data_size": 63488 00:07:50.279 }, 00:07:50.279 { 00:07:50.279 "name": null, 00:07:50.279 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:50.279 "is_configured": false, 00:07:50.279 "data_offset": 0, 00:07:50.279 "data_size": 63488 00:07:50.279 }, 00:07:50.279 { 00:07:50.279 "name": "BaseBdev3", 00:07:50.279 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:50.279 "is_configured": true, 00:07:50.279 "data_offset": 2048, 00:07:50.279 "data_size": 63488 00:07:50.279 } 00:07:50.279 ] 00:07:50.279 }' 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.279 01:51:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.846 [2024-12-07 01:51:56.117337] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.846 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.846 "name": "Existed_Raid", 00:07:50.846 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:50.846 "strip_size_kb": 64, 00:07:50.846 "state": "configuring", 00:07:50.846 "raid_level": "raid0", 00:07:50.846 "superblock": true, 00:07:50.846 "num_base_bdevs": 3, 00:07:50.846 "num_base_bdevs_discovered": 1, 00:07:50.846 "num_base_bdevs_operational": 3, 00:07:50.846 "base_bdevs_list": [ 00:07:50.846 { 00:07:50.846 "name": null, 00:07:50.846 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:50.846 "is_configured": false, 00:07:50.846 "data_offset": 0, 00:07:50.846 "data_size": 63488 00:07:50.846 }, 00:07:50.847 { 00:07:50.847 "name": null, 00:07:50.847 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:50.847 "is_configured": false, 00:07:50.847 "data_offset": 0, 00:07:50.847 
"data_size": 63488 00:07:50.847 }, 00:07:50.847 { 00:07:50.847 "name": "BaseBdev3", 00:07:50.847 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:50.847 "is_configured": true, 00:07:50.847 "data_offset": 2048, 00:07:50.847 "data_size": 63488 00:07:50.847 } 00:07:50.847 ] 00:07:50.847 }' 00:07:50.847 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.847 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.416 [2024-12-07 01:51:56.611048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:51.416 01:51:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.416 "name": "Existed_Raid", 00:07:51.416 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:51.416 "strip_size_kb": 64, 00:07:51.416 "state": "configuring", 00:07:51.416 "raid_level": "raid0", 00:07:51.416 "superblock": true, 00:07:51.416 "num_base_bdevs": 3, 00:07:51.416 
"num_base_bdevs_discovered": 2, 00:07:51.416 "num_base_bdevs_operational": 3, 00:07:51.416 "base_bdevs_list": [ 00:07:51.416 { 00:07:51.416 "name": null, 00:07:51.416 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:51.416 "is_configured": false, 00:07:51.416 "data_offset": 0, 00:07:51.416 "data_size": 63488 00:07:51.416 }, 00:07:51.416 { 00:07:51.416 "name": "BaseBdev2", 00:07:51.416 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:51.416 "is_configured": true, 00:07:51.416 "data_offset": 2048, 00:07:51.416 "data_size": 63488 00:07:51.416 }, 00:07:51.416 { 00:07:51.416 "name": "BaseBdev3", 00:07:51.416 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:51.416 "is_configured": true, 00:07:51.416 "data_offset": 2048, 00:07:51.416 "data_size": 63488 00:07:51.416 } 00:07:51.416 ] 00:07:51.416 }' 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.416 01:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:51.675 01:51:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.675 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0dc2b1ec-2e50-4477-ae78-93840dd5c2db 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.935 [2024-12-07 01:51:57.153282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:51.935 [2024-12-07 01:51:57.153459] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:51.935 [2024-12-07 01:51:57.153476] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:51.935 [2024-12-07 01:51:57.153745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:51.935 NewBaseBdev 00:07:51.935 [2024-12-07 01:51:57.153885] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:51.935 [2024-12-07 01:51:57.153897] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:51.935 [2024-12-07 01:51:57.154015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:07:51.935 
01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.935 [ 00:07:51.935 { 00:07:51.935 "name": "NewBaseBdev", 00:07:51.935 "aliases": [ 00:07:51.935 "0dc2b1ec-2e50-4477-ae78-93840dd5c2db" 00:07:51.935 ], 00:07:51.935 "product_name": "Malloc disk", 00:07:51.935 "block_size": 512, 00:07:51.935 "num_blocks": 65536, 00:07:51.935 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:51.935 "assigned_rate_limits": { 00:07:51.935 "rw_ios_per_sec": 0, 00:07:51.935 "rw_mbytes_per_sec": 0, 00:07:51.935 "r_mbytes_per_sec": 0, 00:07:51.935 "w_mbytes_per_sec": 0 00:07:51.935 }, 00:07:51.935 "claimed": true, 00:07:51.935 "claim_type": "exclusive_write", 00:07:51.935 "zoned": false, 00:07:51.935 "supported_io_types": { 00:07:51.935 "read": true, 00:07:51.935 "write": true, 00:07:51.935 
"unmap": true, 00:07:51.935 "flush": true, 00:07:51.935 "reset": true, 00:07:51.935 "nvme_admin": false, 00:07:51.935 "nvme_io": false, 00:07:51.935 "nvme_io_md": false, 00:07:51.935 "write_zeroes": true, 00:07:51.935 "zcopy": true, 00:07:51.935 "get_zone_info": false, 00:07:51.935 "zone_management": false, 00:07:51.935 "zone_append": false, 00:07:51.935 "compare": false, 00:07:51.935 "compare_and_write": false, 00:07:51.935 "abort": true, 00:07:51.935 "seek_hole": false, 00:07:51.935 "seek_data": false, 00:07:51.935 "copy": true, 00:07:51.935 "nvme_iov_md": false 00:07:51.935 }, 00:07:51.935 "memory_domains": [ 00:07:51.935 { 00:07:51.935 "dma_device_id": "system", 00:07:51.935 "dma_device_type": 1 00:07:51.935 }, 00:07:51.935 { 00:07:51.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.935 "dma_device_type": 2 00:07:51.935 } 00:07:51.935 ], 00:07:51.935 "driver_specific": {} 00:07:51.935 } 00:07:51.935 ] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.935 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.936 "name": "Existed_Raid", 00:07:51.936 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:51.936 "strip_size_kb": 64, 00:07:51.936 "state": "online", 00:07:51.936 "raid_level": "raid0", 00:07:51.936 "superblock": true, 00:07:51.936 "num_base_bdevs": 3, 00:07:51.936 "num_base_bdevs_discovered": 3, 00:07:51.936 "num_base_bdevs_operational": 3, 00:07:51.936 "base_bdevs_list": [ 00:07:51.936 { 00:07:51.936 "name": "NewBaseBdev", 00:07:51.936 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:51.936 "is_configured": true, 00:07:51.936 "data_offset": 2048, 00:07:51.936 "data_size": 63488 00:07:51.936 }, 00:07:51.936 { 00:07:51.936 "name": "BaseBdev2", 00:07:51.936 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:51.936 "is_configured": true, 00:07:51.936 "data_offset": 2048, 00:07:51.936 "data_size": 63488 00:07:51.936 }, 00:07:51.936 { 00:07:51.936 "name": "BaseBdev3", 00:07:51.936 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:51.936 
"is_configured": true, 00:07:51.936 "data_offset": 2048, 00:07:51.936 "data_size": 63488 00:07:51.936 } 00:07:51.936 ] 00:07:51.936 }' 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.936 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 [2024-12-07 01:51:57.652822] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.248 "name": "Existed_Raid", 00:07:52.248 "aliases": [ 00:07:52.248 "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb" 00:07:52.248 ], 00:07:52.248 "product_name": "Raid 
Volume", 00:07:52.248 "block_size": 512, 00:07:52.248 "num_blocks": 190464, 00:07:52.248 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:52.248 "assigned_rate_limits": { 00:07:52.248 "rw_ios_per_sec": 0, 00:07:52.248 "rw_mbytes_per_sec": 0, 00:07:52.248 "r_mbytes_per_sec": 0, 00:07:52.248 "w_mbytes_per_sec": 0 00:07:52.248 }, 00:07:52.248 "claimed": false, 00:07:52.248 "zoned": false, 00:07:52.248 "supported_io_types": { 00:07:52.248 "read": true, 00:07:52.248 "write": true, 00:07:52.248 "unmap": true, 00:07:52.248 "flush": true, 00:07:52.248 "reset": true, 00:07:52.248 "nvme_admin": false, 00:07:52.248 "nvme_io": false, 00:07:52.248 "nvme_io_md": false, 00:07:52.248 "write_zeroes": true, 00:07:52.248 "zcopy": false, 00:07:52.248 "get_zone_info": false, 00:07:52.248 "zone_management": false, 00:07:52.248 "zone_append": false, 00:07:52.248 "compare": false, 00:07:52.248 "compare_and_write": false, 00:07:52.248 "abort": false, 00:07:52.248 "seek_hole": false, 00:07:52.248 "seek_data": false, 00:07:52.248 "copy": false, 00:07:52.248 "nvme_iov_md": false 00:07:52.248 }, 00:07:52.248 "memory_domains": [ 00:07:52.248 { 00:07:52.248 "dma_device_id": "system", 00:07:52.248 "dma_device_type": 1 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.248 "dma_device_type": 2 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "dma_device_id": "system", 00:07:52.248 "dma_device_type": 1 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.248 "dma_device_type": 2 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "dma_device_id": "system", 00:07:52.248 "dma_device_type": 1 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.248 "dma_device_type": 2 00:07:52.248 } 00:07:52.248 ], 00:07:52.248 "driver_specific": { 00:07:52.248 "raid": { 00:07:52.248 "uuid": "7be8ee07-53fd-4b74-ae37-17ef1b4f19cb", 00:07:52.248 "strip_size_kb": 64, 00:07:52.248 "state": "online", 
00:07:52.248 "raid_level": "raid0", 00:07:52.248 "superblock": true, 00:07:52.248 "num_base_bdevs": 3, 00:07:52.248 "num_base_bdevs_discovered": 3, 00:07:52.248 "num_base_bdevs_operational": 3, 00:07:52.248 "base_bdevs_list": [ 00:07:52.248 { 00:07:52.248 "name": "NewBaseBdev", 00:07:52.248 "uuid": "0dc2b1ec-2e50-4477-ae78-93840dd5c2db", 00:07:52.248 "is_configured": true, 00:07:52.248 "data_offset": 2048, 00:07:52.248 "data_size": 63488 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "name": "BaseBdev2", 00:07:52.248 "uuid": "84e255dd-5901-4d5f-a7d6-59f95c11897c", 00:07:52.248 "is_configured": true, 00:07:52.248 "data_offset": 2048, 00:07:52.248 "data_size": 63488 00:07:52.248 }, 00:07:52.248 { 00:07:52.248 "name": "BaseBdev3", 00:07:52.248 "uuid": "0363ac19-d254-4af8-bd19-729da8978aaa", 00:07:52.248 "is_configured": true, 00:07:52.248 "data_offset": 2048, 00:07:52.248 "data_size": 63488 00:07:52.248 } 00:07:52.248 ] 00:07:52.248 } 00:07:52.248 } 00:07:52.248 }' 00:07:52.248 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.506 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:52.506 BaseBdev2 00:07:52.506 BaseBdev3' 00:07:52.506 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.506 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.506 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.507 01:51:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.507 [2024-12-07 01:51:57.884151] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.507 [2024-12-07 01:51:57.884185] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.507 [2024-12-07 01:51:57.884266] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.507 [2024-12-07 01:51:57.884318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:52.507 [2024-12-07 01:51:57.884330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75375 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75375 ']' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
75375 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75375 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:52.507 killing process with pid 75375 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75375' 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 75375 00:07:52.507 [2024-12-07 01:51:57.937337] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:52.507 01:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 75375 00:07:52.766 [2024-12-07 01:51:57.969712] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.766 01:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.766 00:07:52.766 real 0m8.791s 00:07:52.766 user 0m15.050s 00:07:52.766 sys 0m1.792s 00:07:52.766 01:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.766 01:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.766 ************************************ 00:07:52.766 END TEST raid_state_function_test_sb 00:07:52.766 ************************************ 00:07:53.024 01:51:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:53.024 01:51:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:53.024 
01:51:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.024 01:51:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:53.024 ************************************ 00:07:53.024 START TEST raid_superblock_test 00:07:53.024 ************************************ 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:07:53.024 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75979 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75979 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 75979 ']' 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:53.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:53.025 01:51:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.025 [2024-12-07 01:51:58.359271] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:53.025 [2024-12-07 01:51:58.359389] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75979 ]
00:07:53.283 [2024-12-07 01:51:58.506071] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:53.283 [2024-12-07 01:51:58.554077] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.283 [2024-12-07 01:51:58.596178] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:53.283 [2024-12-07 01:51:58.596219] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.850 malloc1
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.850 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 [2024-12-07 01:51:59.223470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:53.851 [2024-12-07 01:51:59.223558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:53.851 [2024-12-07 01:51:59.223581] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:07:53.851 [2024-12-07 01:51:59.223601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:53.851 [2024-12-07 01:51:59.225824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:53.851 [2024-12-07 01:51:59.225862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:53.851 pt1
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 malloc2
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 [2024-12-07 01:51:59.264714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:53.851 [2024-12-07 01:51:59.264786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:53.851 [2024-12-07 01:51:59.264804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:53.851 [2024-12-07 01:51:59.264816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:53.851 [2024-12-07 01:51:59.267072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:53.851 [2024-12-07 01:51:59.267111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:53.851 pt2
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 malloc3
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 [2024-12-07 01:51:59.293459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:07:53.851 [2024-12-07 01:51:59.293533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:53.851 [2024-12-07 01:51:59.293552] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:53.851 [2024-12-07 01:51:59.293562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:53.851 [2024-12-07 01:51:59.295644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:53.851 [2024-12-07 01:51:59.295691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:07:53.851 pt3
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:53.851 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:53.851 [2024-12-07 01:51:59.305510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:53.851 [2024-12-07 01:51:59.307532] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:53.851 [2024-12-07 01:51:59.307592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:07:53.851 [2024-12-07 01:51:59.307765] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:07:53.851 [2024-12-07 01:51:59.307777] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:07:53.851 [2024-12-07 01:51:59.308038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:07:53.851 [2024-12-07 01:51:59.308173] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:07:53.851 [2024-12-07 01:51:59.308188] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:07:53.851 [2024-12-07 01:51:59.308338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:54.109 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:54.110 "name": "raid_bdev1",
00:07:54.110 "uuid": "755c8555-b665-4005-be75-21df445f36bc",
00:07:54.110 "strip_size_kb": 64,
00:07:54.110 "state": "online",
00:07:54.110 "raid_level": "raid0",
00:07:54.110 "superblock": true,
00:07:54.110 "num_base_bdevs": 3,
00:07:54.110 "num_base_bdevs_discovered": 3,
00:07:54.110 "num_base_bdevs_operational": 3,
00:07:54.110 "base_bdevs_list": [
00:07:54.110 {
00:07:54.110 "name": "pt1",
00:07:54.110 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:54.110 "is_configured": true,
00:07:54.110 "data_offset": 2048,
00:07:54.110 "data_size": 63488
00:07:54.110 },
00:07:54.110 {
00:07:54.110 "name": "pt2",
00:07:54.110 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:54.110 "is_configured": true,
00:07:54.110 "data_offset": 2048,
00:07:54.110 "data_size": 63488
00:07:54.110 },
00:07:54.110 {
00:07:54.110 "name": "pt3",
00:07:54.110 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:54.110 "is_configured": true,
00:07:54.110 "data_offset": 2048,
00:07:54.110 "data_size": 63488
00:07:54.110 }
00:07:54.110 ]
00:07:54.110 }'
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:54.110 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.368 [2024-12-07 01:51:59.757033] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:54.368 "name": "raid_bdev1",
00:07:54.368 "aliases": [
00:07:54.368 "755c8555-b665-4005-be75-21df445f36bc"
00:07:54.368 ],
00:07:54.368 "product_name": "Raid Volume",
00:07:54.368 "block_size": 512,
00:07:54.368 "num_blocks": 190464,
00:07:54.368 "uuid": "755c8555-b665-4005-be75-21df445f36bc",
00:07:54.368 "assigned_rate_limits": {
00:07:54.368 "rw_ios_per_sec": 0,
00:07:54.368 "rw_mbytes_per_sec": 0,
00:07:54.368 "r_mbytes_per_sec": 0,
00:07:54.368 "w_mbytes_per_sec": 0
00:07:54.368 },
00:07:54.368 "claimed": false,
00:07:54.368 "zoned": false,
00:07:54.368 "supported_io_types": {
00:07:54.368 "read": true,
00:07:54.368 "write": true,
00:07:54.368 "unmap": true,
00:07:54.368 "flush": true,
00:07:54.368 "reset": true,
00:07:54.368 "nvme_admin": false,
00:07:54.368 "nvme_io": false,
00:07:54.368 "nvme_io_md": false,
00:07:54.368 "write_zeroes": true,
00:07:54.368 "zcopy": false,
00:07:54.368 "get_zone_info": false,
00:07:54.368 "zone_management": false,
00:07:54.368 "zone_append": false,
00:07:54.368 "compare": false,
00:07:54.368 "compare_and_write": false,
00:07:54.368 "abort": false,
00:07:54.368 "seek_hole": false,
00:07:54.368 "seek_data": false,
00:07:54.368 "copy": false,
00:07:54.368 "nvme_iov_md": false
00:07:54.368 },
00:07:54.368 "memory_domains": [
00:07:54.368 {
00:07:54.368 "dma_device_id": "system",
00:07:54.368 "dma_device_type": 1
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.368 "dma_device_type": 2
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "dma_device_id": "system",
00:07:54.368 "dma_device_type": 1
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.368 "dma_device_type": 2
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "dma_device_id": "system",
00:07:54.368 "dma_device_type": 1
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:54.368 "dma_device_type": 2
00:07:54.368 }
00:07:54.368 ],
00:07:54.368 "driver_specific": {
00:07:54.368 "raid": {
00:07:54.368 "uuid": "755c8555-b665-4005-be75-21df445f36bc",
00:07:54.368 "strip_size_kb": 64,
00:07:54.368 "state": "online",
00:07:54.368 "raid_level": "raid0",
00:07:54.368 "superblock": true,
00:07:54.368 "num_base_bdevs": 3,
00:07:54.368 "num_base_bdevs_discovered": 3,
00:07:54.368 "num_base_bdevs_operational": 3,
00:07:54.368 "base_bdevs_list": [
00:07:54.368 {
00:07:54.368 "name": "pt1",
00:07:54.368 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:54.368 "is_configured": true,
00:07:54.368 "data_offset": 2048,
00:07:54.368 "data_size": 63488
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "name": "pt2",
00:07:54.368 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:54.368 "is_configured": true,
00:07:54.368 "data_offset": 2048,
00:07:54.368 "data_size": 63488
00:07:54.368 },
00:07:54.368 {
00:07:54.368 "name": "pt3",
00:07:54.368 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:54.368 "is_configured": true,
00:07:54.368 "data_offset": 2048,
00:07:54.368 "data_size": 63488
00:07:54.368 }
00:07:54.368 ]
00:07:54.368 }
00:07:54.368 }
00:07:54.368 }'
00:07:54.368 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:54.627 pt2
00:07:54.627 pt3'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.627 01:51:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.627 [2024-12-07 01:52:00.032462] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=755c8555-b665-4005-be75-21df445f36bc
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 755c8555-b665-4005-be75-21df445f36bc ']'
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.627 [2024-12-07 01:52:00.076147] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:54.627 [2024-12-07 01:52:00.076228] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:54.627 [2024-12-07 01:52:00.076332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:54.627 [2024-12-07 01:52:00.076393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:54.627 [2024-12-07 01:52:00.076407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.627 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 [2024-12-07 01:52:00.227939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:54.887 [2024-12-07 01:52:00.229962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:54.887 [2024-12-07 01:52:00.230015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:07:54.887 [2024-12-07 01:52:00.230084] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:54.887 [2024-12-07 01:52:00.230145] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:54.887 [2024-12-07 01:52:00.230167] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:07:54.887 [2024-12-07 01:52:00.230181] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:54.887 [2024-12-07 01:52:00.230204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring
00:07:54.887 request:
00:07:54.887 {
00:07:54.887 "name": "raid_bdev1",
00:07:54.887 "raid_level": "raid0",
00:07:54.887 "base_bdevs": [
00:07:54.887 "malloc1",
00:07:54.887 "malloc2",
00:07:54.887 "malloc3"
00:07:54.887 ],
00:07:54.887 "strip_size_kb": 64,
00:07:54.887 "superblock": false,
00:07:54.887 "method": "bdev_raid_create",
00:07:54.887 "req_id": 1
00:07:54.887 }
00:07:54.887 Got JSON-RPC error response
00:07:54.887 response:
00:07:54.887 {
00:07:54.887 "code": -17,
00:07:54.887 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:54.887 }
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.887 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.887 [2024-12-07 01:52:00.291782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:54.887 [2024-12-07 01:52:00.291914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:54.887 [2024-12-07 01:52:00.291960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:54.887 [2024-12-07 01:52:00.291992] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:54.887 [2024-12-07 01:52:00.294227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:54.887 [2024-12-07 01:52:00.294324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:54.887 [2024-12-07 01:52:00.294441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:54.887 [2024-12-07 01:52:00.294509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:54.888 pt1
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:54.888 "name": "raid_bdev1",
00:07:54.888 "uuid": "755c8555-b665-4005-be75-21df445f36bc",
00:07:54.888 "strip_size_kb": 64,
00:07:54.888 "state": "configuring",
00:07:54.888 "raid_level": "raid0",
00:07:54.888 "superblock": true,
00:07:54.888 "num_base_bdevs": 3,
00:07:54.888 "num_base_bdevs_discovered": 1,
00:07:54.888 "num_base_bdevs_operational": 3,
00:07:54.888 "base_bdevs_list": [
00:07:54.888 {
00:07:54.888 "name": "pt1",
00:07:54.888 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:54.888 "is_configured": true,
00:07:54.888 "data_offset": 2048,
00:07:54.888 "data_size": 63488
00:07:54.888 },
00:07:54.888 {
00:07:54.888 "name": null,
00:07:54.888 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:54.888 "is_configured": false,
00:07:54.888 "data_offset": 2048,
00:07:54.888 "data_size": 63488
00:07:54.888 },
00:07:54.888 {
00:07:54.888 "name": null,
00:07:54.888 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:54.888 "is_configured": false,
00:07:54.888 "data_offset": 2048,
00:07:54.888 "data_size": 63488
00:07:54.888 }
00:07:54.888 ]
00:07:54.888 }'
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:54.888 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.459 [2024-12-07 01:52:00.763028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:55.459 [2024-12-07 01:52:00.763162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:55.459 [2024-12-07 01:52:00.763204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:07:55.459 [2024-12-07 01:52:00.763237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:55.459 [2024-12-07 01:52:00.763673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:55.459 [2024-12-07 01:52:00.763710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:55.459 [2024-12-07 01:52:00.763785] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:55.459 [2024-12-07 01:52:00.763811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:55.459 pt2
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.459 [2024-12-07 01:52:00.775003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:55.459 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:55.459 "name": "raid_bdev1",
00:07:55.459 "uuid": "755c8555-b665-4005-be75-21df445f36bc",
00:07:55.459 "strip_size_kb": 64,
00:07:55.459 "state": "configuring",
00:07:55.459 "raid_level": "raid0",
00:07:55.459 "superblock": true,
00:07:55.459 "num_base_bdevs": 3,
00:07:55.459 "num_base_bdevs_discovered": 1,
00:07:55.459 "num_base_bdevs_operational": 3,
00:07:55.460 "base_bdevs_list": [
00:07:55.460 {
00:07:55.460 "name": "pt1",
00:07:55.460 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:55.460 "is_configured": true,
00:07:55.460 "data_offset": 2048,
00:07:55.460 "data_size": 63488
00:07:55.460 },
00:07:55.460 {
00:07:55.460 "name": null,
00:07:55.460 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:55.460 "is_configured": false,
00:07:55.460 "data_offset": 0,
00:07:55.460 "data_size": 63488
00:07:55.460 },
00:07:55.460 {
00:07:55.460 "name": null,
00:07:55.460 "uuid": "00000000-0000-0000-0000-000000000003",
00:07:55.460 "is_configured": false,
00:07:55.460 "data_offset": 2048,
00:07:55.460 "data_size": 63488
00:07:55.460 }
00:07:55.460 ]
00:07:55.460 }'
00:07:55.460 01:52:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:55.460 01:52:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.029 [2024-12-07 01:52:01.242189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:56.029 [2024-12-07 01:52:01.242287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:56.029 [2024-12-07 01:52:01.242325] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:07:56.029 [2024-12-07 01:52:01.242352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:56.029 [2024-12-07 01:52:01.242768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:56.029 [2024-12-07 01:52:01.242827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:56.029 [2024-12-07 01:52:01.242960] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:07:56.029 [2024-12-07 01:52:01.243011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:56.029 pt2
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.029 [2024-12-07 01:52:01.250145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:56.029 [2024-12-07 01:52:01.250221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.029 [2024-12-07 01:52:01.250262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:56.029 [2024-12-07 01:52:01.250291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.029 [2024-12-07 01:52:01.250621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.029 [2024-12-07 01:52:01.250686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:56.029 [2024-12-07 01:52:01.250769] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:56.029 [2024-12-07 01:52:01.250823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:56.029 [2024-12-07 01:52:01.250955] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:56.029 [2024-12-07 01:52:01.250992] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:56.029 [2024-12-07 01:52:01.251230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:56.029 [2024-12-07 01:52:01.251361] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:56.029 [2024-12-07 01:52:01.251400] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:56.029 [2024-12-07 01:52:01.251526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.029 pt3 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.029 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.029 "name": "raid_bdev1", 00:07:56.029 "uuid": "755c8555-b665-4005-be75-21df445f36bc", 00:07:56.029 "strip_size_kb": 64, 00:07:56.029 "state": "online", 00:07:56.029 "raid_level": "raid0", 00:07:56.029 "superblock": true, 00:07:56.029 "num_base_bdevs": 3, 00:07:56.029 "num_base_bdevs_discovered": 3, 00:07:56.029 "num_base_bdevs_operational": 3, 00:07:56.029 "base_bdevs_list": [ 00:07:56.029 { 00:07:56.029 "name": "pt1", 00:07:56.029 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.029 "is_configured": true, 00:07:56.029 "data_offset": 2048, 00:07:56.029 "data_size": 63488 00:07:56.030 }, 00:07:56.030 { 00:07:56.030 "name": "pt2", 00:07:56.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.030 "is_configured": true, 00:07:56.030 "data_offset": 2048, 00:07:56.030 "data_size": 63488 00:07:56.030 }, 00:07:56.030 { 00:07:56.030 "name": "pt3", 00:07:56.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:56.030 "is_configured": true, 00:07:56.030 "data_offset": 2048, 00:07:56.030 "data_size": 63488 00:07:56.030 } 00:07:56.030 ] 00:07:56.030 }' 00:07:56.030 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.030 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:56.289 01:52:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.289 [2024-12-07 01:52:01.713702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.289 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:56.550 "name": "raid_bdev1", 00:07:56.550 "aliases": [ 00:07:56.550 "755c8555-b665-4005-be75-21df445f36bc" 00:07:56.550 ], 00:07:56.550 "product_name": "Raid Volume", 00:07:56.550 "block_size": 512, 00:07:56.550 "num_blocks": 190464, 00:07:56.550 "uuid": "755c8555-b665-4005-be75-21df445f36bc", 00:07:56.550 "assigned_rate_limits": { 00:07:56.550 "rw_ios_per_sec": 0, 00:07:56.550 "rw_mbytes_per_sec": 0, 00:07:56.550 "r_mbytes_per_sec": 0, 00:07:56.550 "w_mbytes_per_sec": 0 00:07:56.550 }, 00:07:56.550 "claimed": false, 00:07:56.550 "zoned": false, 00:07:56.550 "supported_io_types": { 00:07:56.550 "read": true, 00:07:56.550 "write": true, 00:07:56.550 "unmap": true, 00:07:56.550 "flush": true, 00:07:56.550 "reset": true, 00:07:56.550 "nvme_admin": false, 00:07:56.550 "nvme_io": false, 00:07:56.550 "nvme_io_md": false, 00:07:56.550 
"write_zeroes": true, 00:07:56.550 "zcopy": false, 00:07:56.550 "get_zone_info": false, 00:07:56.550 "zone_management": false, 00:07:56.550 "zone_append": false, 00:07:56.550 "compare": false, 00:07:56.550 "compare_and_write": false, 00:07:56.550 "abort": false, 00:07:56.550 "seek_hole": false, 00:07:56.550 "seek_data": false, 00:07:56.550 "copy": false, 00:07:56.550 "nvme_iov_md": false 00:07:56.550 }, 00:07:56.550 "memory_domains": [ 00:07:56.550 { 00:07:56.550 "dma_device_id": "system", 00:07:56.550 "dma_device_type": 1 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.550 "dma_device_type": 2 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "dma_device_id": "system", 00:07:56.550 "dma_device_type": 1 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.550 "dma_device_type": 2 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "dma_device_id": "system", 00:07:56.550 "dma_device_type": 1 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.550 "dma_device_type": 2 00:07:56.550 } 00:07:56.550 ], 00:07:56.550 "driver_specific": { 00:07:56.550 "raid": { 00:07:56.550 "uuid": "755c8555-b665-4005-be75-21df445f36bc", 00:07:56.550 "strip_size_kb": 64, 00:07:56.550 "state": "online", 00:07:56.550 "raid_level": "raid0", 00:07:56.550 "superblock": true, 00:07:56.550 "num_base_bdevs": 3, 00:07:56.550 "num_base_bdevs_discovered": 3, 00:07:56.550 "num_base_bdevs_operational": 3, 00:07:56.550 "base_bdevs_list": [ 00:07:56.550 { 00:07:56.550 "name": "pt1", 00:07:56.550 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:56.550 "is_configured": true, 00:07:56.550 "data_offset": 2048, 00:07:56.550 "data_size": 63488 00:07:56.550 }, 00:07:56.550 { 00:07:56.550 "name": "pt2", 00:07:56.550 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.550 "is_configured": true, 00:07:56.550 "data_offset": 2048, 00:07:56.550 "data_size": 63488 00:07:56.550 }, 00:07:56.550 
{ 00:07:56.550 "name": "pt3", 00:07:56.550 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:56.550 "is_configured": true, 00:07:56.550 "data_offset": 2048, 00:07:56.550 "data_size": 63488 00:07:56.550 } 00:07:56.550 ] 00:07:56.550 } 00:07:56.550 } 00:07:56.550 }' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:56.550 pt2 00:07:56.550 pt3' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:56.550 01:52:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.550 01:52:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.550 
[2024-12-07 01:52:01.997176] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 755c8555-b665-4005-be75-21df445f36bc '!=' 755c8555-b665-4005-be75-21df445f36bc ']' 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75979 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 75979 ']' 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 75979 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75979 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:56.811 killing process with pid 75979 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75979' 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 75979 00:07:56.811 [2024-12-07 01:52:02.083348] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.811 [2024-12-07 01:52:02.083455] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.811 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 75979 00:07:56.811 [2024-12-07 01:52:02.083535] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.811 [2024-12-07 01:52:02.083548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:56.811 [2024-12-07 01:52:02.118071] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.072 01:52:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:57.072 ************************************ 00:07:57.072 END TEST raid_superblock_test 00:07:57.072 ************************************ 00:07:57.072 00:07:57.072 real 0m4.087s 00:07:57.072 user 0m6.496s 00:07:57.072 sys 0m0.858s 00:07:57.072 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.072 01:52:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 01:52:02 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:57.072 01:52:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:57.072 01:52:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.072 01:52:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.072 ************************************ 00:07:57.072 START TEST raid_read_error_test 00:07:57.072 ************************************ 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:57.072 01:52:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Dt0qa85fz1 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76221 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76221 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 76221 ']' 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.072 01:52:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.333 [2024-12-07 01:52:02.537336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.333 [2024-12-07 01:52:02.537456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76221 ] 00:07:57.333 [2024-12-07 01:52:02.682539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.333 [2024-12-07 01:52:02.727578] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.333 [2024-12-07 01:52:02.769141] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.333 [2024-12-07 01:52:02.769180] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 BaseBdev1_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 true 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 [2024-12-07 01:52:03.407017] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:58.275 [2024-12-07 01:52:03.407129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.275 [2024-12-07 01:52:03.407160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:58.275 [2024-12-07 01:52:03.407177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.275 [2024-12-07 01:52:03.409528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.275 [2024-12-07 01:52:03.409563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:58.275 BaseBdev1 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 BaseBdev2_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 true 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 [2024-12-07 01:52:03.457381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:58.275 [2024-12-07 01:52:03.457447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.275 [2024-12-07 01:52:03.457470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:58.275 [2024-12-07 01:52:03.457479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.275 [2024-12-07 01:52:03.459783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.275 [2024-12-07 01:52:03.459820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:58.275 BaseBdev2 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 BaseBdev3_malloc 00:07:58.275 01:52:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 true 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 [2024-12-07 01:52:03.498294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:58.275 [2024-12-07 01:52:03.498343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.275 [2024-12-07 01:52:03.498366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:58.275 [2024-12-07 01:52:03.498375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.275 [2024-12-07 01:52:03.500688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.275 [2024-12-07 01:52:03.500720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:58.275 BaseBdev3 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 [2024-12-07 01:52:03.510360] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:58.275 [2024-12-07 01:52:03.512301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.275 [2024-12-07 01:52:03.512398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.275 [2024-12-07 01:52:03.512574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:58.275 [2024-12-07 01:52:03.512589] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:58.275 [2024-12-07 01:52:03.512914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:58.275 [2024-12-07 01:52:03.513091] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:58.275 [2024-12-07 01:52:03.513111] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:58.275 [2024-12-07 01:52:03.513265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.275 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.276 01:52:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.276 "name": "raid_bdev1", 00:07:58.276 "uuid": "d06b3ec9-0570-4712-8979-d37416cb3323", 00:07:58.276 "strip_size_kb": 64, 00:07:58.276 "state": "online", 00:07:58.276 "raid_level": "raid0", 00:07:58.276 "superblock": true, 00:07:58.276 "num_base_bdevs": 3, 00:07:58.276 "num_base_bdevs_discovered": 3, 00:07:58.276 "num_base_bdevs_operational": 3, 00:07:58.276 "base_bdevs_list": [ 00:07:58.276 { 00:07:58.276 "name": "BaseBdev1", 00:07:58.276 "uuid": "dd250457-6fd0-54a9-83fb-c87e9422fa47", 00:07:58.276 "is_configured": true, 00:07:58.276 "data_offset": 2048, 00:07:58.276 "data_size": 63488 00:07:58.276 }, 00:07:58.276 { 00:07:58.276 "name": "BaseBdev2", 00:07:58.276 "uuid": "7a33fa65-1663-5e82-8165-bb9b740fb4ac", 00:07:58.276 "is_configured": true, 00:07:58.276 "data_offset": 2048, 00:07:58.276 "data_size": 63488 
00:07:58.276 }, 00:07:58.276 { 00:07:58.276 "name": "BaseBdev3", 00:07:58.276 "uuid": "538fda42-8949-5358-8af4-7d2326580b55", 00:07:58.276 "is_configured": true, 00:07:58.276 "data_offset": 2048, 00:07:58.276 "data_size": 63488 00:07:58.276 } 00:07:58.276 ] 00:07:58.276 }' 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.276 01:52:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.536 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:58.536 01:52:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:58.796 [2024-12-07 01:52:04.069791] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.736 01:52:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.736 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.736 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.736 "name": "raid_bdev1", 00:07:59.736 "uuid": "d06b3ec9-0570-4712-8979-d37416cb3323", 00:07:59.736 "strip_size_kb": 64, 00:07:59.736 "state": "online", 00:07:59.736 "raid_level": "raid0", 00:07:59.736 "superblock": true, 00:07:59.736 "num_base_bdevs": 3, 00:07:59.736 "num_base_bdevs_discovered": 3, 00:07:59.736 "num_base_bdevs_operational": 3, 00:07:59.736 "base_bdevs_list": [ 00:07:59.736 { 00:07:59.736 "name": "BaseBdev1", 00:07:59.736 "uuid": "dd250457-6fd0-54a9-83fb-c87e9422fa47", 00:07:59.736 "is_configured": true, 00:07:59.736 "data_offset": 2048, 00:07:59.736 "data_size": 63488 
00:07:59.736 }, 00:07:59.736 { 00:07:59.736 "name": "BaseBdev2", 00:07:59.736 "uuid": "7a33fa65-1663-5e82-8165-bb9b740fb4ac", 00:07:59.736 "is_configured": true, 00:07:59.736 "data_offset": 2048, 00:07:59.736 "data_size": 63488 00:07:59.736 }, 00:07:59.736 { 00:07:59.736 "name": "BaseBdev3", 00:07:59.736 "uuid": "538fda42-8949-5358-8af4-7d2326580b55", 00:07:59.736 "is_configured": true, 00:07:59.736 "data_offset": 2048, 00:07:59.736 "data_size": 63488 00:07:59.736 } 00:07:59.736 ] 00:07:59.736 }' 00:07:59.736 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.736 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.995 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.995 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.995 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.995 [2024-12-07 01:52:05.449681] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.995 [2024-12-07 01:52:05.449719] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.995 [2024-12-07 01:52:05.452303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.995 [2024-12-07 01:52:05.452354] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.995 [2024-12-07 01:52:05.452406] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.995 [2024-12-07 01:52:05.452417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:00.255 { 00:08:00.255 "results": [ 00:08:00.255 { 00:08:00.255 "job": "raid_bdev1", 00:08:00.255 "core_mask": "0x1", 00:08:00.255 "workload": "randrw", 00:08:00.255 "percentage": 50, 
00:08:00.255 "status": "finished", 00:08:00.255 "queue_depth": 1, 00:08:00.255 "io_size": 131072, 00:08:00.255 "runtime": 1.38058, 00:08:00.255 "iops": 16460.47313447971, 00:08:00.255 "mibps": 2057.559141809964, 00:08:00.255 "io_failed": 1, 00:08:00.255 "io_timeout": 0, 00:08:00.255 "avg_latency_us": 84.20796556048187, 00:08:00.255 "min_latency_us": 25.823580786026202, 00:08:00.255 "max_latency_us": 1373.6803493449781 00:08:00.255 } 00:08:00.255 ], 00:08:00.255 "core_count": 1 00:08:00.255 } 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76221 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 76221 ']' 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 76221 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76221 00:08:00.255 killing process with pid 76221 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76221' 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 76221 00:08:00.255 [2024-12-07 01:52:05.485873] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.255 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 76221 00:08:00.255 [2024-12-07 
01:52:05.512507] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Dt0qa85fz1 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:00.515 ************************************ 00:08:00.515 END TEST raid_read_error_test 00:08:00.515 ************************************ 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:00.515 00:08:00.515 real 0m3.318s 00:08:00.515 user 0m4.244s 00:08:00.515 sys 0m0.507s 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.515 01:52:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.515 01:52:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:00.515 01:52:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:00.515 01:52:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.515 01:52:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.515 ************************************ 00:08:00.515 START TEST raid_write_error_test 00:08:00.515 ************************************ 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:08:00.515 01:52:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:00.515 01:52:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IC0oVvpQ09 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76350 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76350 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 76350 ']' 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.515 01:52:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.515 [2024-12-07 01:52:05.920726] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:00.515 [2024-12-07 01:52:05.920849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76350 ] 00:08:00.775 [2024-12-07 01:52:06.059575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.775 [2024-12-07 01:52:06.110222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.775 [2024-12-07 01:52:06.153215] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.775 [2024-12-07 01:52:06.153263] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.346 BaseBdev1_malloc 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.346 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 true 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 [2024-12-07 01:52:06.823600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:01.607 [2024-12-07 01:52:06.823693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.607 [2024-12-07 01:52:06.823724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:01.607 [2024-12-07 01:52:06.823733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.607 [2024-12-07 01:52:06.826023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.607 [2024-12-07 01:52:06.826061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:01.607 BaseBdev1 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.607 BaseBdev2_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 true 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 [2024-12-07 01:52:06.874377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:01.607 [2024-12-07 01:52:06.874449] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.607 [2024-12-07 01:52:06.874468] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:01.607 [2024-12-07 01:52:06.874476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.607 [2024-12-07 01:52:06.876566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.607 [2024-12-07 01:52:06.876600] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:01.607 BaseBdev2 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:01.607 01:52:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 BaseBdev3_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 true 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 [2024-12-07 01:52:06.914570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:01.607 [2024-12-07 01:52:06.914631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.607 [2024-12-07 01:52:06.914651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:01.607 [2024-12-07 01:52:06.914660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.607 [2024-12-07 01:52:06.916798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.607 [2024-12-07 01:52:06.916829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:01.607 BaseBdev3 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.607 [2024-12-07 01:52:06.926625] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.607 [2024-12-07 01:52:06.928471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.607 [2024-12-07 01:52:06.928561] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:01.607 [2024-12-07 01:52:06.928738] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:01.607 [2024-12-07 01:52:06.928753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:01.607 [2024-12-07 01:52:06.928991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:01.607 [2024-12-07 01:52:06.929125] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:01.607 [2024-12-07 01:52:06.929135] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:01.607 [2024-12-07 01:52:06.929252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.607 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.608 "name": "raid_bdev1", 00:08:01.608 "uuid": "678cdcf6-6722-48a1-9e81-53a388e84605", 00:08:01.608 "strip_size_kb": 64, 00:08:01.608 "state": "online", 00:08:01.608 "raid_level": "raid0", 00:08:01.608 "superblock": true, 00:08:01.608 "num_base_bdevs": 3, 00:08:01.608 "num_base_bdevs_discovered": 3, 00:08:01.608 "num_base_bdevs_operational": 3, 00:08:01.608 "base_bdevs_list": [ 00:08:01.608 { 00:08:01.608 "name": "BaseBdev1", 
00:08:01.608 "uuid": "020abab6-c951-5fa6-8111-e3e6cd506675", 00:08:01.608 "is_configured": true, 00:08:01.608 "data_offset": 2048, 00:08:01.608 "data_size": 63488 00:08:01.608 }, 00:08:01.608 { 00:08:01.608 "name": "BaseBdev2", 00:08:01.608 "uuid": "b5d8b28e-66d2-5a92-b47b-62974624c49e", 00:08:01.608 "is_configured": true, 00:08:01.608 "data_offset": 2048, 00:08:01.608 "data_size": 63488 00:08:01.608 }, 00:08:01.608 { 00:08:01.608 "name": "BaseBdev3", 00:08:01.608 "uuid": "208f2340-ce9d-5897-860d-ca56b58171de", 00:08:01.608 "is_configured": true, 00:08:01.608 "data_offset": 2048, 00:08:01.608 "data_size": 63488 00:08:01.608 } 00:08:01.608 ] 00:08:01.608 }' 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.608 01:52:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.178 01:52:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:02.178 01:52:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:02.178 [2024-12-07 01:52:07.454109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.121 "name": "raid_bdev1", 00:08:03.121 "uuid": "678cdcf6-6722-48a1-9e81-53a388e84605", 00:08:03.121 "strip_size_kb": 64, 00:08:03.121 "state": "online", 00:08:03.121 
"raid_level": "raid0", 00:08:03.121 "superblock": true, 00:08:03.121 "num_base_bdevs": 3, 00:08:03.121 "num_base_bdevs_discovered": 3, 00:08:03.121 "num_base_bdevs_operational": 3, 00:08:03.121 "base_bdevs_list": [ 00:08:03.121 { 00:08:03.121 "name": "BaseBdev1", 00:08:03.121 "uuid": "020abab6-c951-5fa6-8111-e3e6cd506675", 00:08:03.121 "is_configured": true, 00:08:03.121 "data_offset": 2048, 00:08:03.121 "data_size": 63488 00:08:03.121 }, 00:08:03.121 { 00:08:03.121 "name": "BaseBdev2", 00:08:03.121 "uuid": "b5d8b28e-66d2-5a92-b47b-62974624c49e", 00:08:03.121 "is_configured": true, 00:08:03.121 "data_offset": 2048, 00:08:03.121 "data_size": 63488 00:08:03.121 }, 00:08:03.121 { 00:08:03.121 "name": "BaseBdev3", 00:08:03.121 "uuid": "208f2340-ce9d-5897-860d-ca56b58171de", 00:08:03.121 "is_configured": true, 00:08:03.121 "data_offset": 2048, 00:08:03.121 "data_size": 63488 00:08:03.121 } 00:08:03.121 ] 00:08:03.121 }' 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.121 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.382 [2024-12-07 01:52:08.777710] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:03.382 [2024-12-07 01:52:08.777738] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:03.382 [2024-12-07 01:52:08.780156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.382 [2024-12-07 01:52:08.780212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.382 [2024-12-07 01:52:08.780247] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.382 [2024-12-07 01:52:08.780257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:03.382 { 00:08:03.382 "results": [ 00:08:03.382 { 00:08:03.382 "job": "raid_bdev1", 00:08:03.382 "core_mask": "0x1", 00:08:03.382 "workload": "randrw", 00:08:03.382 "percentage": 50, 00:08:03.382 "status": "finished", 00:08:03.382 "queue_depth": 1, 00:08:03.382 "io_size": 131072, 00:08:03.382 "runtime": 1.324268, 00:08:03.382 "iops": 16792.67338635382, 00:08:03.382 "mibps": 2099.0841732942276, 00:08:03.382 "io_failed": 1, 00:08:03.382 "io_timeout": 0, 00:08:03.382 "avg_latency_us": 82.65795935422467, 00:08:03.382 "min_latency_us": 21.351965065502185, 00:08:03.382 "max_latency_us": 1359.3711790393013 00:08:03.382 } 00:08:03.382 ], 00:08:03.382 "core_count": 1 00:08:03.382 } 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76350 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 76350 ']' 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 76350 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76350 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.382 killing process with pid 76350 00:08:03.382 01:52:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76350' 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 76350 00:08:03.382 [2024-12-07 01:52:08.817758] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.382 01:52:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 76350 00:08:03.642 [2024-12-07 01:52:08.843275] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IC0oVvpQ09 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:08:03.642 00:08:03.642 real 0m3.258s 00:08:03.642 user 0m4.141s 00:08:03.642 sys 0m0.494s 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.642 01:52:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.642 ************************************ 00:08:03.642 END TEST raid_write_error_test 00:08:03.642 ************************************ 00:08:03.903 01:52:09 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:03.903 01:52:09 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:03.903 01:52:09 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:03.903 01:52:09 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.904 01:52:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.904 ************************************ 00:08:03.904 START TEST raid_state_function_test 00:08:03.904 ************************************ 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:03.904 01:52:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76477 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.904 Process raid pid: 76477 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76477' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76477 00:08:03.904 01:52:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 76477 ']' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:03.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:03.904 01:52:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.904 [2024-12-07 01:52:09.231753] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:03.904 [2024-12-07 01:52:09.231876] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.164 [2024-12-07 01:52:09.378139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.164 [2024-12-07 01:52:09.423725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.164 [2024-12-07 01:52:09.464668] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.164 [2024-12-07 01:52:09.464726] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.736 [2024-12-07 01:52:10.081297] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.736 [2024-12-07 01:52:10.081353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.736 [2024-12-07 01:52:10.081365] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.736 [2024-12-07 01:52:10.081375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.736 [2024-12-07 01:52:10.081381] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.736 [2024-12-07 01:52:10.081394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.736 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.736 "name": "Existed_Raid", 00:08:04.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.736 "strip_size_kb": 64, 00:08:04.736 "state": "configuring", 00:08:04.736 "raid_level": "concat", 00:08:04.736 "superblock": false, 00:08:04.736 "num_base_bdevs": 3, 00:08:04.736 "num_base_bdevs_discovered": 0, 00:08:04.736 "num_base_bdevs_operational": 3, 00:08:04.736 "base_bdevs_list": [ 00:08:04.736 { 00:08:04.736 "name": "BaseBdev1", 00:08:04.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.736 "is_configured": false, 00:08:04.736 "data_offset": 0, 00:08:04.736 "data_size": 0 00:08:04.736 }, 00:08:04.736 { 00:08:04.736 "name": "BaseBdev2", 00:08:04.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.737 "is_configured": false, 00:08:04.737 "data_offset": 0, 00:08:04.737 "data_size": 0 00:08:04.737 }, 00:08:04.737 { 00:08:04.737 "name": "BaseBdev3", 00:08:04.737 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:04.737 "is_configured": false, 00:08:04.737 "data_offset": 0, 00:08:04.737 "data_size": 0 00:08:04.737 } 00:08:04.737 ] 00:08:04.737 }' 00:08:04.737 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.737 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.308 [2024-12-07 01:52:10.536438] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.308 [2024-12-07 01:52:10.536484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.308 [2024-12-07 01:52:10.548425] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:05.308 [2024-12-07 01:52:10.548466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:05.308 [2024-12-07 01:52:10.548474] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.308 [2024-12-07 01:52:10.548570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:05.308 [2024-12-07 01:52:10.548576] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.308 [2024-12-07 01:52:10.548584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.308 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.309 [2024-12-07 01:52:10.568996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.309 BaseBdev1 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.309 [ 00:08:05.309 { 00:08:05.309 "name": "BaseBdev1", 00:08:05.309 "aliases": [ 00:08:05.309 "442d7533-525f-4cc2-a0e7-daa490a309bc" 00:08:05.309 ], 00:08:05.309 "product_name": "Malloc disk", 00:08:05.309 "block_size": 512, 00:08:05.309 "num_blocks": 65536, 00:08:05.309 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:05.309 "assigned_rate_limits": { 00:08:05.309 "rw_ios_per_sec": 0, 00:08:05.309 "rw_mbytes_per_sec": 0, 00:08:05.309 "r_mbytes_per_sec": 0, 00:08:05.309 "w_mbytes_per_sec": 0 00:08:05.309 }, 00:08:05.309 "claimed": true, 00:08:05.309 "claim_type": "exclusive_write", 00:08:05.309 "zoned": false, 00:08:05.309 "supported_io_types": { 00:08:05.309 "read": true, 00:08:05.309 "write": true, 00:08:05.309 "unmap": true, 00:08:05.309 "flush": true, 00:08:05.309 "reset": true, 00:08:05.309 "nvme_admin": false, 00:08:05.309 "nvme_io": false, 00:08:05.309 "nvme_io_md": false, 00:08:05.309 "write_zeroes": true, 00:08:05.309 "zcopy": true, 00:08:05.309 "get_zone_info": false, 00:08:05.309 "zone_management": false, 00:08:05.309 "zone_append": false, 00:08:05.309 "compare": false, 00:08:05.309 "compare_and_write": false, 00:08:05.309 "abort": true, 00:08:05.309 "seek_hole": false, 00:08:05.309 "seek_data": false, 00:08:05.309 "copy": true, 00:08:05.309 "nvme_iov_md": false 00:08:05.309 }, 00:08:05.309 "memory_domains": [ 00:08:05.309 { 00:08:05.309 "dma_device_id": "system", 00:08:05.309 "dma_device_type": 1 00:08:05.309 }, 00:08:05.309 { 00:08:05.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:05.309 "dma_device_type": 2 00:08:05.309 } 00:08:05.309 ], 00:08:05.309 "driver_specific": {} 00:08:05.309 } 00:08:05.309 ] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.309 01:52:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.309 "name": "Existed_Raid", 00:08:05.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.309 "strip_size_kb": 64, 00:08:05.309 "state": "configuring", 00:08:05.309 "raid_level": "concat", 00:08:05.309 "superblock": false, 00:08:05.309 "num_base_bdevs": 3, 00:08:05.309 "num_base_bdevs_discovered": 1, 00:08:05.309 "num_base_bdevs_operational": 3, 00:08:05.309 "base_bdevs_list": [ 00:08:05.309 { 00:08:05.309 "name": "BaseBdev1", 00:08:05.309 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:05.309 "is_configured": true, 00:08:05.309 "data_offset": 0, 00:08:05.309 "data_size": 65536 00:08:05.309 }, 00:08:05.309 { 00:08:05.309 "name": "BaseBdev2", 00:08:05.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.309 "is_configured": false, 00:08:05.309 "data_offset": 0, 00:08:05.309 "data_size": 0 00:08:05.309 }, 00:08:05.309 { 00:08:05.309 "name": "BaseBdev3", 00:08:05.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.309 "is_configured": false, 00:08:05.309 "data_offset": 0, 00:08:05.309 "data_size": 0 00:08:05.309 } 00:08:05.309 ] 00:08:05.309 }' 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.309 01:52:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.879 [2024-12-07 01:52:11.056222] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.879 [2024-12-07 01:52:11.056271] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.879 [2024-12-07 01:52:11.068252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.879 [2024-12-07 01:52:11.070061] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.879 [2024-12-07 01:52:11.070156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:05.879 [2024-12-07 01:52:11.070169] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:05.879 [2024-12-07 01:52:11.070179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.879 01:52:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.879 "name": "Existed_Raid", 00:08:05.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.879 "strip_size_kb": 64, 00:08:05.879 "state": "configuring", 00:08:05.879 "raid_level": "concat", 00:08:05.879 "superblock": false, 00:08:05.879 "num_base_bdevs": 3, 00:08:05.879 "num_base_bdevs_discovered": 1, 00:08:05.879 "num_base_bdevs_operational": 3, 00:08:05.879 "base_bdevs_list": [ 00:08:05.879 { 00:08:05.879 "name": "BaseBdev1", 00:08:05.879 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:05.879 "is_configured": true, 00:08:05.879 "data_offset": 
0, 00:08:05.879 "data_size": 65536 00:08:05.879 }, 00:08:05.879 { 00:08:05.879 "name": "BaseBdev2", 00:08:05.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.879 "is_configured": false, 00:08:05.879 "data_offset": 0, 00:08:05.879 "data_size": 0 00:08:05.879 }, 00:08:05.879 { 00:08:05.879 "name": "BaseBdev3", 00:08:05.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.879 "is_configured": false, 00:08:05.879 "data_offset": 0, 00:08:05.879 "data_size": 0 00:08:05.879 } 00:08:05.879 ] 00:08:05.879 }' 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.879 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.139 [2024-12-07 01:52:11.440031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.139 BaseBdev2 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.139 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.139 [ 00:08:06.139 { 00:08:06.139 "name": "BaseBdev2", 00:08:06.139 "aliases": [ 00:08:06.139 "b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2" 00:08:06.139 ], 00:08:06.139 "product_name": "Malloc disk", 00:08:06.139 "block_size": 512, 00:08:06.139 "num_blocks": 65536, 00:08:06.139 "uuid": "b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2", 00:08:06.139 "assigned_rate_limits": { 00:08:06.139 "rw_ios_per_sec": 0, 00:08:06.140 "rw_mbytes_per_sec": 0, 00:08:06.140 "r_mbytes_per_sec": 0, 00:08:06.140 "w_mbytes_per_sec": 0 00:08:06.140 }, 00:08:06.140 "claimed": true, 00:08:06.140 "claim_type": "exclusive_write", 00:08:06.140 "zoned": false, 00:08:06.140 "supported_io_types": { 00:08:06.140 "read": true, 00:08:06.140 "write": true, 00:08:06.140 "unmap": true, 00:08:06.140 "flush": true, 00:08:06.140 "reset": true, 00:08:06.140 "nvme_admin": false, 00:08:06.140 "nvme_io": false, 00:08:06.140 "nvme_io_md": false, 00:08:06.140 "write_zeroes": true, 00:08:06.140 "zcopy": true, 00:08:06.140 "get_zone_info": false, 00:08:06.140 "zone_management": false, 00:08:06.140 "zone_append": false, 00:08:06.140 "compare": false, 00:08:06.140 "compare_and_write": false, 00:08:06.140 "abort": true, 00:08:06.140 "seek_hole": 
false, 00:08:06.140 "seek_data": false, 00:08:06.140 "copy": true, 00:08:06.140 "nvme_iov_md": false 00:08:06.140 }, 00:08:06.140 "memory_domains": [ 00:08:06.140 { 00:08:06.140 "dma_device_id": "system", 00:08:06.140 "dma_device_type": 1 00:08:06.140 }, 00:08:06.140 { 00:08:06.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.140 "dma_device_type": 2 00:08:06.140 } 00:08:06.140 ], 00:08:06.140 "driver_specific": {} 00:08:06.140 } 00:08:06.140 ] 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.140 "name": "Existed_Raid", 00:08:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.140 "strip_size_kb": 64, 00:08:06.140 "state": "configuring", 00:08:06.140 "raid_level": "concat", 00:08:06.140 "superblock": false, 00:08:06.140 "num_base_bdevs": 3, 00:08:06.140 "num_base_bdevs_discovered": 2, 00:08:06.140 "num_base_bdevs_operational": 3, 00:08:06.140 "base_bdevs_list": [ 00:08:06.140 { 00:08:06.140 "name": "BaseBdev1", 00:08:06.140 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:06.140 "is_configured": true, 00:08:06.140 "data_offset": 0, 00:08:06.140 "data_size": 65536 00:08:06.140 }, 00:08:06.140 { 00:08:06.140 "name": "BaseBdev2", 00:08:06.140 "uuid": "b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2", 00:08:06.140 "is_configured": true, 00:08:06.140 "data_offset": 0, 00:08:06.140 "data_size": 65536 00:08:06.140 }, 00:08:06.140 { 00:08:06.140 "name": "BaseBdev3", 00:08:06.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.140 "is_configured": false, 00:08:06.140 "data_offset": 0, 00:08:06.140 "data_size": 0 00:08:06.140 } 00:08:06.140 ] 00:08:06.140 }' 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.140 01:52:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.710 [2024-12-07 01:52:11.938028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:06.710 [2024-12-07 01:52:11.938127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:06.710 [2024-12-07 01:52:11.938145] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:06.710 [2024-12-07 01:52:11.938438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:06.710 [2024-12-07 01:52:11.938565] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:06.710 [2024-12-07 01:52:11.938581] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:06.710 [2024-12-07 01:52:11.938802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.710 BaseBdev3 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:06.710 01:52:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.710 [ 00:08:06.710 { 00:08:06.710 "name": "BaseBdev3", 00:08:06.710 "aliases": [ 00:08:06.710 "e463b5e9-1714-4d38-a72b-314a2ad73a23" 00:08:06.710 ], 00:08:06.710 "product_name": "Malloc disk", 00:08:06.710 "block_size": 512, 00:08:06.710 "num_blocks": 65536, 00:08:06.710 "uuid": "e463b5e9-1714-4d38-a72b-314a2ad73a23", 00:08:06.710 "assigned_rate_limits": { 00:08:06.710 "rw_ios_per_sec": 0, 00:08:06.710 "rw_mbytes_per_sec": 0, 00:08:06.710 "r_mbytes_per_sec": 0, 00:08:06.710 "w_mbytes_per_sec": 0 00:08:06.710 }, 00:08:06.710 "claimed": true, 00:08:06.710 "claim_type": "exclusive_write", 00:08:06.710 "zoned": false, 00:08:06.710 "supported_io_types": { 00:08:06.710 "read": true, 00:08:06.710 "write": true, 00:08:06.710 "unmap": true, 00:08:06.710 "flush": true, 00:08:06.710 "reset": true, 00:08:06.710 "nvme_admin": false, 00:08:06.710 "nvme_io": false, 00:08:06.710 "nvme_io_md": false, 00:08:06.710 "write_zeroes": true, 00:08:06.710 "zcopy": true, 00:08:06.710 "get_zone_info": false, 00:08:06.710 "zone_management": false, 00:08:06.710 "zone_append": false, 00:08:06.710 "compare": false, 
00:08:06.710 "compare_and_write": false, 00:08:06.710 "abort": true, 00:08:06.710 "seek_hole": false, 00:08:06.710 "seek_data": false, 00:08:06.710 "copy": true, 00:08:06.710 "nvme_iov_md": false 00:08:06.710 }, 00:08:06.710 "memory_domains": [ 00:08:06.710 { 00:08:06.710 "dma_device_id": "system", 00:08:06.710 "dma_device_type": 1 00:08:06.710 }, 00:08:06.710 { 00:08:06.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.710 "dma_device_type": 2 00:08:06.710 } 00:08:06.710 ], 00:08:06.710 "driver_specific": {} 00:08:06.710 } 00:08:06.710 ] 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.710 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.711 01:52:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.711 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.711 "name": "Existed_Raid", 00:08:06.711 "uuid": "72103208-5fd2-451d-adc1-c708e28fa5c9", 00:08:06.711 "strip_size_kb": 64, 00:08:06.711 "state": "online", 00:08:06.711 "raid_level": "concat", 00:08:06.711 "superblock": false, 00:08:06.711 "num_base_bdevs": 3, 00:08:06.711 "num_base_bdevs_discovered": 3, 00:08:06.711 "num_base_bdevs_operational": 3, 00:08:06.711 "base_bdevs_list": [ 00:08:06.711 { 00:08:06.711 "name": "BaseBdev1", 00:08:06.711 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:06.711 "is_configured": true, 00:08:06.711 "data_offset": 0, 00:08:06.711 "data_size": 65536 00:08:06.711 }, 00:08:06.711 { 00:08:06.711 "name": "BaseBdev2", 00:08:06.711 "uuid": "b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2", 00:08:06.711 "is_configured": true, 00:08:06.711 "data_offset": 0, 00:08:06.711 "data_size": 65536 00:08:06.711 }, 00:08:06.711 { 00:08:06.711 "name": "BaseBdev3", 00:08:06.711 "uuid": "e463b5e9-1714-4d38-a72b-314a2ad73a23", 00:08:06.711 "is_configured": true, 00:08:06.711 "data_offset": 0, 00:08:06.711 "data_size": 65536 00:08:06.711 } 00:08:06.711 ] 00:08:06.711 }' 00:08:06.711 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:06.711 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.971 [2024-12-07 01:52:12.345636] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.971 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.971 "name": "Existed_Raid", 00:08:06.971 "aliases": [ 00:08:06.971 "72103208-5fd2-451d-adc1-c708e28fa5c9" 00:08:06.971 ], 00:08:06.971 "product_name": "Raid Volume", 00:08:06.971 "block_size": 512, 00:08:06.971 "num_blocks": 196608, 00:08:06.971 "uuid": "72103208-5fd2-451d-adc1-c708e28fa5c9", 00:08:06.971 "assigned_rate_limits": { 00:08:06.971 "rw_ios_per_sec": 0, 00:08:06.971 "rw_mbytes_per_sec": 0, 00:08:06.971 "r_mbytes_per_sec": 
0, 00:08:06.971 "w_mbytes_per_sec": 0 00:08:06.971 }, 00:08:06.971 "claimed": false, 00:08:06.971 "zoned": false, 00:08:06.971 "supported_io_types": { 00:08:06.971 "read": true, 00:08:06.971 "write": true, 00:08:06.971 "unmap": true, 00:08:06.971 "flush": true, 00:08:06.971 "reset": true, 00:08:06.971 "nvme_admin": false, 00:08:06.971 "nvme_io": false, 00:08:06.971 "nvme_io_md": false, 00:08:06.971 "write_zeroes": true, 00:08:06.971 "zcopy": false, 00:08:06.971 "get_zone_info": false, 00:08:06.971 "zone_management": false, 00:08:06.971 "zone_append": false, 00:08:06.971 "compare": false, 00:08:06.971 "compare_and_write": false, 00:08:06.971 "abort": false, 00:08:06.971 "seek_hole": false, 00:08:06.971 "seek_data": false, 00:08:06.971 "copy": false, 00:08:06.971 "nvme_iov_md": false 00:08:06.971 }, 00:08:06.971 "memory_domains": [ 00:08:06.971 { 00:08:06.971 "dma_device_id": "system", 00:08:06.971 "dma_device_type": 1 00:08:06.971 }, 00:08:06.971 { 00:08:06.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.971 "dma_device_type": 2 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "dma_device_id": "system", 00:08:06.972 "dma_device_type": 1 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.972 "dma_device_type": 2 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "dma_device_id": "system", 00:08:06.972 "dma_device_type": 1 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.972 "dma_device_type": 2 00:08:06.972 } 00:08:06.972 ], 00:08:06.972 "driver_specific": { 00:08:06.972 "raid": { 00:08:06.972 "uuid": "72103208-5fd2-451d-adc1-c708e28fa5c9", 00:08:06.972 "strip_size_kb": 64, 00:08:06.972 "state": "online", 00:08:06.972 "raid_level": "concat", 00:08:06.972 "superblock": false, 00:08:06.972 "num_base_bdevs": 3, 00:08:06.972 "num_base_bdevs_discovered": 3, 00:08:06.972 "num_base_bdevs_operational": 3, 00:08:06.972 "base_bdevs_list": [ 00:08:06.972 { 00:08:06.972 "name": "BaseBdev1", 
00:08:06.972 "uuid": "442d7533-525f-4cc2-a0e7-daa490a309bc", 00:08:06.972 "is_configured": true, 00:08:06.972 "data_offset": 0, 00:08:06.972 "data_size": 65536 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "name": "BaseBdev2", 00:08:06.972 "uuid": "b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2", 00:08:06.972 "is_configured": true, 00:08:06.972 "data_offset": 0, 00:08:06.972 "data_size": 65536 00:08:06.972 }, 00:08:06.972 { 00:08:06.972 "name": "BaseBdev3", 00:08:06.972 "uuid": "e463b5e9-1714-4d38-a72b-314a2ad73a23", 00:08:06.972 "is_configured": true, 00:08:06.972 "data_offset": 0, 00:08:06.972 "data_size": 65536 00:08:06.972 } 00:08:06.972 ] 00:08:06.972 } 00:08:06.972 } 00:08:06.972 }' 00:08:06.972 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:07.233 BaseBdev2 00:08:07.233 BaseBdev3' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 [2024-12-07 01:52:12.644902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:07.233 [2024-12-07 01:52:12.644928] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:07.233 [2024-12-07 01:52:12.644977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.233 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.494 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.494 "name": "Existed_Raid", 00:08:07.494 "uuid": "72103208-5fd2-451d-adc1-c708e28fa5c9", 00:08:07.494 "strip_size_kb": 64, 00:08:07.494 "state": "offline", 00:08:07.494 "raid_level": "concat", 00:08:07.494 "superblock": false, 00:08:07.494 "num_base_bdevs": 3, 00:08:07.494 "num_base_bdevs_discovered": 2, 00:08:07.494 "num_base_bdevs_operational": 2, 00:08:07.494 "base_bdevs_list": [ 00:08:07.494 { 00:08:07.494 "name": null, 00:08:07.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.494 "is_configured": false, 00:08:07.494 "data_offset": 0, 00:08:07.494 "data_size": 65536 00:08:07.494 }, 00:08:07.494 { 00:08:07.494 "name": "BaseBdev2", 00:08:07.494 "uuid": 
"b7a41d5a-c7a4-4eab-a8ca-acf15ce7c7e2", 00:08:07.494 "is_configured": true, 00:08:07.494 "data_offset": 0, 00:08:07.494 "data_size": 65536 00:08:07.494 }, 00:08:07.494 { 00:08:07.494 "name": "BaseBdev3", 00:08:07.494 "uuid": "e463b5e9-1714-4d38-a72b-314a2ad73a23", 00:08:07.494 "is_configured": true, 00:08:07.494 "data_offset": 0, 00:08:07.494 "data_size": 65536 00:08:07.494 } 00:08:07.494 ] 00:08:07.494 }' 00:08:07.494 01:52:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.494 01:52:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.754 [2024-12-07 01:52:13.191202] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:07.754 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 [2024-12-07 01:52:13.257963] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.015 [2024-12-07 01:52:13.258052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:08.015 01:52:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 BaseBdev2 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.015 
01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.015 [ 00:08:08.015 { 00:08:08.015 "name": "BaseBdev2", 00:08:08.015 "aliases": [ 00:08:08.015 "f8cc0aad-f474-47a7-98ba-00773d3ab2ed" 00:08:08.015 ], 00:08:08.015 "product_name": "Malloc disk", 00:08:08.015 "block_size": 512, 00:08:08.015 "num_blocks": 65536, 00:08:08.015 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:08.015 "assigned_rate_limits": { 00:08:08.015 "rw_ios_per_sec": 0, 00:08:08.015 "rw_mbytes_per_sec": 0, 00:08:08.015 "r_mbytes_per_sec": 0, 00:08:08.015 "w_mbytes_per_sec": 0 00:08:08.015 }, 00:08:08.015 "claimed": false, 00:08:08.015 "zoned": false, 00:08:08.015 "supported_io_types": { 00:08:08.015 "read": true, 00:08:08.015 "write": true, 00:08:08.015 "unmap": true, 00:08:08.015 "flush": true, 00:08:08.015 "reset": true, 00:08:08.015 "nvme_admin": false, 00:08:08.015 "nvme_io": false, 00:08:08.015 "nvme_io_md": false, 00:08:08.015 "write_zeroes": true, 
00:08:08.015 "zcopy": true, 00:08:08.015 "get_zone_info": false, 00:08:08.015 "zone_management": false, 00:08:08.015 "zone_append": false, 00:08:08.015 "compare": false, 00:08:08.015 "compare_and_write": false, 00:08:08.015 "abort": true, 00:08:08.015 "seek_hole": false, 00:08:08.015 "seek_data": false, 00:08:08.015 "copy": true, 00:08:08.015 "nvme_iov_md": false 00:08:08.015 }, 00:08:08.015 "memory_domains": [ 00:08:08.015 { 00:08:08.015 "dma_device_id": "system", 00:08:08.015 "dma_device_type": 1 00:08:08.015 }, 00:08:08.015 { 00:08:08.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.015 "dma_device_type": 2 00:08:08.015 } 00:08:08.015 ], 00:08:08.015 "driver_specific": {} 00:08:08.015 } 00:08:08.015 ] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.015 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 BaseBdev3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:08.016 01:52:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 [ 00:08:08.016 { 00:08:08.016 "name": "BaseBdev3", 00:08:08.016 "aliases": [ 00:08:08.016 "e76e6ada-b62e-4096-a76b-a6faf4f58588" 00:08:08.016 ], 00:08:08.016 "product_name": "Malloc disk", 00:08:08.016 "block_size": 512, 00:08:08.016 "num_blocks": 65536, 00:08:08.016 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:08.016 "assigned_rate_limits": { 00:08:08.016 "rw_ios_per_sec": 0, 00:08:08.016 "rw_mbytes_per_sec": 0, 00:08:08.016 "r_mbytes_per_sec": 0, 00:08:08.016 "w_mbytes_per_sec": 0 00:08:08.016 }, 00:08:08.016 "claimed": false, 00:08:08.016 "zoned": false, 00:08:08.016 "supported_io_types": { 00:08:08.016 "read": true, 00:08:08.016 "write": true, 00:08:08.016 "unmap": true, 00:08:08.016 "flush": true, 00:08:08.016 "reset": true, 00:08:08.016 "nvme_admin": false, 00:08:08.016 "nvme_io": false, 00:08:08.016 "nvme_io_md": false, 00:08:08.016 "write_zeroes": true, 
00:08:08.016 "zcopy": true, 00:08:08.016 "get_zone_info": false, 00:08:08.016 "zone_management": false, 00:08:08.016 "zone_append": false, 00:08:08.016 "compare": false, 00:08:08.016 "compare_and_write": false, 00:08:08.016 "abort": true, 00:08:08.016 "seek_hole": false, 00:08:08.016 "seek_data": false, 00:08:08.016 "copy": true, 00:08:08.016 "nvme_iov_md": false 00:08:08.016 }, 00:08:08.016 "memory_domains": [ 00:08:08.016 { 00:08:08.016 "dma_device_id": "system", 00:08:08.016 "dma_device_type": 1 00:08:08.016 }, 00:08:08.016 { 00:08:08.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.016 "dma_device_type": 2 00:08:08.016 } 00:08:08.016 ], 00:08:08.016 "driver_specific": {} 00:08:08.016 } 00:08:08.016 ] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 [2024-12-07 01:52:13.421287] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:08.016 [2024-12-07 01:52:13.421381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:08.016 [2024-12-07 01:52:13.421419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.016 [2024-12-07 01:52:13.423215] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.016 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.276 01:52:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.276 "name": "Existed_Raid", 00:08:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.276 "strip_size_kb": 64, 00:08:08.276 "state": "configuring", 00:08:08.276 "raid_level": "concat", 00:08:08.276 "superblock": false, 00:08:08.276 "num_base_bdevs": 3, 00:08:08.276 "num_base_bdevs_discovered": 2, 00:08:08.276 "num_base_bdevs_operational": 3, 00:08:08.276 "base_bdevs_list": [ 00:08:08.276 { 00:08:08.276 "name": "BaseBdev1", 00:08:08.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.276 "is_configured": false, 00:08:08.276 "data_offset": 0, 00:08:08.276 "data_size": 0 00:08:08.276 }, 00:08:08.276 { 00:08:08.276 "name": "BaseBdev2", 00:08:08.276 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:08.276 "is_configured": true, 00:08:08.276 "data_offset": 0, 00:08:08.276 "data_size": 65536 00:08:08.276 }, 00:08:08.276 { 00:08:08.276 "name": "BaseBdev3", 00:08:08.276 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:08.276 "is_configured": true, 00:08:08.276 "data_offset": 0, 00:08:08.276 "data_size": 65536 00:08:08.276 } 00:08:08.276 ] 00:08:08.276 }' 00:08:08.276 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.276 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.536 [2024-12-07 01:52:13.816640] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.536 "name": "Existed_Raid", 00:08:08.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.536 "strip_size_kb": 64, 00:08:08.536 "state": "configuring", 00:08:08.536 "raid_level": "concat", 00:08:08.536 "superblock": false, 
00:08:08.536 "num_base_bdevs": 3, 00:08:08.536 "num_base_bdevs_discovered": 1, 00:08:08.536 "num_base_bdevs_operational": 3, 00:08:08.536 "base_bdevs_list": [ 00:08:08.536 { 00:08:08.536 "name": "BaseBdev1", 00:08:08.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.536 "is_configured": false, 00:08:08.536 "data_offset": 0, 00:08:08.536 "data_size": 0 00:08:08.536 }, 00:08:08.536 { 00:08:08.536 "name": null, 00:08:08.536 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:08.536 "is_configured": false, 00:08:08.536 "data_offset": 0, 00:08:08.536 "data_size": 65536 00:08:08.536 }, 00:08:08.536 { 00:08:08.536 "name": "BaseBdev3", 00:08:08.536 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:08.536 "is_configured": true, 00:08:08.536 "data_offset": 0, 00:08:08.536 "data_size": 65536 00:08:08.536 } 00:08:08.536 ] 00:08:08.536 }' 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.536 01:52:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.106 
01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 [2024-12-07 01:52:14.306504] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.106 BaseBdev1 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 [ 00:08:09.106 { 00:08:09.106 "name": "BaseBdev1", 00:08:09.106 "aliases": [ 00:08:09.106 "cab180a9-ee5b-41e1-a6be-d5b8dc117e74" 00:08:09.106 ], 00:08:09.106 "product_name": 
"Malloc disk", 00:08:09.106 "block_size": 512, 00:08:09.106 "num_blocks": 65536, 00:08:09.106 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:09.106 "assigned_rate_limits": { 00:08:09.106 "rw_ios_per_sec": 0, 00:08:09.106 "rw_mbytes_per_sec": 0, 00:08:09.106 "r_mbytes_per_sec": 0, 00:08:09.106 "w_mbytes_per_sec": 0 00:08:09.106 }, 00:08:09.106 "claimed": true, 00:08:09.106 "claim_type": "exclusive_write", 00:08:09.106 "zoned": false, 00:08:09.106 "supported_io_types": { 00:08:09.106 "read": true, 00:08:09.106 "write": true, 00:08:09.106 "unmap": true, 00:08:09.106 "flush": true, 00:08:09.106 "reset": true, 00:08:09.106 "nvme_admin": false, 00:08:09.106 "nvme_io": false, 00:08:09.106 "nvme_io_md": false, 00:08:09.106 "write_zeroes": true, 00:08:09.106 "zcopy": true, 00:08:09.106 "get_zone_info": false, 00:08:09.106 "zone_management": false, 00:08:09.106 "zone_append": false, 00:08:09.106 "compare": false, 00:08:09.106 "compare_and_write": false, 00:08:09.106 "abort": true, 00:08:09.106 "seek_hole": false, 00:08:09.106 "seek_data": false, 00:08:09.106 "copy": true, 00:08:09.106 "nvme_iov_md": false 00:08:09.106 }, 00:08:09.106 "memory_domains": [ 00:08:09.106 { 00:08:09.106 "dma_device_id": "system", 00:08:09.106 "dma_device_type": 1 00:08:09.106 }, 00:08:09.106 { 00:08:09.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.106 "dma_device_type": 2 00:08:09.106 } 00:08:09.106 ], 00:08:09.106 "driver_specific": {} 00:08:09.106 } 00:08:09.106 ] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.106 01:52:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.106 "name": "Existed_Raid", 00:08:09.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.106 "strip_size_kb": 64, 00:08:09.106 "state": "configuring", 00:08:09.106 "raid_level": "concat", 00:08:09.106 "superblock": false, 00:08:09.106 "num_base_bdevs": 3, 00:08:09.106 "num_base_bdevs_discovered": 2, 00:08:09.106 "num_base_bdevs_operational": 3, 00:08:09.106 "base_bdevs_list": [ 00:08:09.106 { 00:08:09.106 "name": "BaseBdev1", 
00:08:09.106 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:09.106 "is_configured": true, 00:08:09.106 "data_offset": 0, 00:08:09.106 "data_size": 65536 00:08:09.106 }, 00:08:09.106 { 00:08:09.106 "name": null, 00:08:09.106 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:09.106 "is_configured": false, 00:08:09.106 "data_offset": 0, 00:08:09.106 "data_size": 65536 00:08:09.106 }, 00:08:09.106 { 00:08:09.106 "name": "BaseBdev3", 00:08:09.106 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:09.106 "is_configured": true, 00:08:09.106 "data_offset": 0, 00:08:09.106 "data_size": 65536 00:08:09.106 } 00:08:09.106 ] 00:08:09.106 }' 00:08:09.106 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.107 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.366 [2024-12-07 01:52:14.801688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:09.366 
01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.366 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.626 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.626 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.626 "name": "Existed_Raid", 00:08:09.626 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:09.626 "strip_size_kb": 64, 00:08:09.626 "state": "configuring", 00:08:09.626 "raid_level": "concat", 00:08:09.626 "superblock": false, 00:08:09.626 "num_base_bdevs": 3, 00:08:09.626 "num_base_bdevs_discovered": 1, 00:08:09.626 "num_base_bdevs_operational": 3, 00:08:09.626 "base_bdevs_list": [ 00:08:09.626 { 00:08:09.626 "name": "BaseBdev1", 00:08:09.626 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:09.626 "is_configured": true, 00:08:09.626 "data_offset": 0, 00:08:09.626 "data_size": 65536 00:08:09.626 }, 00:08:09.626 { 00:08:09.626 "name": null, 00:08:09.626 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:09.626 "is_configured": false, 00:08:09.626 "data_offset": 0, 00:08:09.626 "data_size": 65536 00:08:09.626 }, 00:08:09.626 { 00:08:09.626 "name": null, 00:08:09.626 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:09.626 "is_configured": false, 00:08:09.626 "data_offset": 0, 00:08:09.626 "data_size": 65536 00:08:09.626 } 00:08:09.626 ] 00:08:09.626 }' 00:08:09.626 01:52:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.626 01:52:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 [2024-12-07 01:52:15.288859] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.885 "name": "Existed_Raid", 00:08:09.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.885 "strip_size_kb": 64, 00:08:09.885 "state": "configuring", 00:08:09.885 "raid_level": "concat", 00:08:09.885 "superblock": false, 00:08:09.885 "num_base_bdevs": 3, 00:08:09.885 "num_base_bdevs_discovered": 2, 00:08:09.885 "num_base_bdevs_operational": 3, 00:08:09.885 "base_bdevs_list": [ 00:08:09.885 { 00:08:09.885 "name": "BaseBdev1", 00:08:09.885 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:09.885 "is_configured": true, 00:08:09.885 "data_offset": 0, 00:08:09.885 "data_size": 65536 00:08:09.885 }, 00:08:09.885 { 00:08:09.885 "name": null, 00:08:09.885 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:09.885 "is_configured": false, 00:08:09.885 "data_offset": 0, 00:08:09.885 "data_size": 65536 00:08:09.885 }, 00:08:09.885 { 00:08:09.885 "name": "BaseBdev3", 00:08:09.885 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:09.885 "is_configured": true, 00:08:09.885 "data_offset": 0, 00:08:09.885 "data_size": 65536 00:08:09.885 } 00:08:09.885 ] 00:08:09.885 }' 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.885 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.453 [2024-12-07 01:52:15.752137] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.453 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.454 
01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.454 "name": "Existed_Raid", 00:08:10.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.454 "strip_size_kb": 64, 00:08:10.454 "state": "configuring", 00:08:10.454 "raid_level": "concat", 00:08:10.454 "superblock": false, 00:08:10.454 "num_base_bdevs": 3, 00:08:10.454 "num_base_bdevs_discovered": 1, 00:08:10.454 "num_base_bdevs_operational": 3, 00:08:10.454 "base_bdevs_list": [ 00:08:10.454 { 00:08:10.454 "name": null, 00:08:10.454 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:10.454 "is_configured": false, 00:08:10.454 "data_offset": 0, 00:08:10.454 "data_size": 65536 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "name": null, 00:08:10.454 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:10.454 "is_configured": false, 00:08:10.454 "data_offset": 0, 00:08:10.454 "data_size": 65536 00:08:10.454 }, 00:08:10.454 { 00:08:10.454 "name": "BaseBdev3", 00:08:10.454 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:10.454 "is_configured": true, 00:08:10.454 "data_offset": 0, 00:08:10.454 "data_size": 65536 00:08:10.454 } 00:08:10.454 ] 00:08:10.454 }' 00:08:10.454 01:52:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.454 01:52:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.022 [2024-12-07 01:52:16.241661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.022 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.023 01:52:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.023 "name": "Existed_Raid", 00:08:11.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.023 "strip_size_kb": 64, 00:08:11.023 "state": "configuring", 00:08:11.023 "raid_level": "concat", 00:08:11.023 "superblock": false, 00:08:11.023 "num_base_bdevs": 3, 00:08:11.023 "num_base_bdevs_discovered": 2, 00:08:11.023 "num_base_bdevs_operational": 3, 00:08:11.023 "base_bdevs_list": [ 00:08:11.023 { 00:08:11.023 "name": null, 00:08:11.023 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:11.023 "is_configured": false, 00:08:11.023 "data_offset": 0, 00:08:11.023 "data_size": 65536 00:08:11.023 }, 00:08:11.023 { 00:08:11.023 "name": "BaseBdev2", 00:08:11.023 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:11.023 "is_configured": true, 00:08:11.023 "data_offset": 
0, 00:08:11.023 "data_size": 65536 00:08:11.023 }, 00:08:11.023 { 00:08:11.023 "name": "BaseBdev3", 00:08:11.023 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:11.023 "is_configured": true, 00:08:11.023 "data_offset": 0, 00:08:11.023 "data_size": 65536 00:08:11.023 } 00:08:11.023 ] 00:08:11.023 }' 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.023 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:11.281 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cab180a9-ee5b-41e1-a6be-d5b8dc117e74 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.557 [2024-12-07 01:52:16.783510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:11.557 [2024-12-07 01:52:16.783623] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:11.557 [2024-12-07 01:52:16.783650] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:11.557 [2024-12-07 01:52:16.783938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:11.557 [2024-12-07 01:52:16.784098] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:11.557 [2024-12-07 01:52:16.784136] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:11.557 [2024-12-07 01:52:16.784351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.557 NewBaseBdev 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:11.557 
01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.557 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.557 [ 00:08:11.557 { 00:08:11.557 "name": "NewBaseBdev", 00:08:11.557 "aliases": [ 00:08:11.557 "cab180a9-ee5b-41e1-a6be-d5b8dc117e74" 00:08:11.557 ], 00:08:11.557 "product_name": "Malloc disk", 00:08:11.557 "block_size": 512, 00:08:11.557 "num_blocks": 65536, 00:08:11.557 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:11.557 "assigned_rate_limits": { 00:08:11.557 "rw_ios_per_sec": 0, 00:08:11.557 "rw_mbytes_per_sec": 0, 00:08:11.557 "r_mbytes_per_sec": 0, 00:08:11.558 "w_mbytes_per_sec": 0 00:08:11.558 }, 00:08:11.558 "claimed": true, 00:08:11.558 "claim_type": "exclusive_write", 00:08:11.558 "zoned": false, 00:08:11.558 "supported_io_types": { 00:08:11.558 "read": true, 00:08:11.558 "write": true, 00:08:11.558 "unmap": true, 00:08:11.558 "flush": true, 00:08:11.558 "reset": true, 00:08:11.558 "nvme_admin": false, 00:08:11.558 "nvme_io": false, 00:08:11.558 "nvme_io_md": false, 00:08:11.558 "write_zeroes": true, 00:08:11.558 "zcopy": true, 00:08:11.558 "get_zone_info": false, 00:08:11.558 "zone_management": false, 00:08:11.558 "zone_append": false, 00:08:11.558 "compare": false, 00:08:11.558 "compare_and_write": false, 00:08:11.558 "abort": true, 00:08:11.558 "seek_hole": false, 00:08:11.558 "seek_data": false, 00:08:11.558 "copy": true, 00:08:11.558 "nvme_iov_md": false 00:08:11.558 }, 00:08:11.558 
"memory_domains": [ 00:08:11.558 { 00:08:11.558 "dma_device_id": "system", 00:08:11.558 "dma_device_type": 1 00:08:11.558 }, 00:08:11.558 { 00:08:11.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.558 "dma_device_type": 2 00:08:11.558 } 00:08:11.558 ], 00:08:11.558 "driver_specific": {} 00:08:11.558 } 00:08:11.558 ] 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.558 "name": "Existed_Raid", 00:08:11.558 "uuid": "e4cbb5d5-4df9-4fb3-a9b1-48e56d7d2173", 00:08:11.558 "strip_size_kb": 64, 00:08:11.558 "state": "online", 00:08:11.558 "raid_level": "concat", 00:08:11.558 "superblock": false, 00:08:11.558 "num_base_bdevs": 3, 00:08:11.558 "num_base_bdevs_discovered": 3, 00:08:11.558 "num_base_bdevs_operational": 3, 00:08:11.558 "base_bdevs_list": [ 00:08:11.558 { 00:08:11.558 "name": "NewBaseBdev", 00:08:11.558 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:11.558 "is_configured": true, 00:08:11.558 "data_offset": 0, 00:08:11.558 "data_size": 65536 00:08:11.558 }, 00:08:11.558 { 00:08:11.558 "name": "BaseBdev2", 00:08:11.558 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:11.558 "is_configured": true, 00:08:11.558 "data_offset": 0, 00:08:11.558 "data_size": 65536 00:08:11.558 }, 00:08:11.558 { 00:08:11.558 "name": "BaseBdev3", 00:08:11.558 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:11.558 "is_configured": true, 00:08:11.558 "data_offset": 0, 00:08:11.558 "data_size": 65536 00:08:11.558 } 00:08:11.558 ] 00:08:11.558 }' 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.558 01:52:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.817 [2024-12-07 01:52:17.251078] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.817 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:12.076 "name": "Existed_Raid", 00:08:12.076 "aliases": [ 00:08:12.076 "e4cbb5d5-4df9-4fb3-a9b1-48e56d7d2173" 00:08:12.076 ], 00:08:12.076 "product_name": "Raid Volume", 00:08:12.076 "block_size": 512, 00:08:12.076 "num_blocks": 196608, 00:08:12.076 "uuid": "e4cbb5d5-4df9-4fb3-a9b1-48e56d7d2173", 00:08:12.076 "assigned_rate_limits": { 00:08:12.076 "rw_ios_per_sec": 0, 00:08:12.076 "rw_mbytes_per_sec": 0, 00:08:12.076 "r_mbytes_per_sec": 0, 00:08:12.076 "w_mbytes_per_sec": 0 00:08:12.076 }, 00:08:12.076 "claimed": false, 00:08:12.076 "zoned": false, 00:08:12.076 "supported_io_types": { 00:08:12.076 "read": true, 00:08:12.076 "write": true, 00:08:12.076 "unmap": true, 00:08:12.076 "flush": true, 00:08:12.076 "reset": true, 00:08:12.076 "nvme_admin": false, 00:08:12.076 "nvme_io": false, 00:08:12.076 "nvme_io_md": false, 00:08:12.076 "write_zeroes": true, 
00:08:12.076 "zcopy": false, 00:08:12.076 "get_zone_info": false, 00:08:12.076 "zone_management": false, 00:08:12.076 "zone_append": false, 00:08:12.076 "compare": false, 00:08:12.076 "compare_and_write": false, 00:08:12.076 "abort": false, 00:08:12.076 "seek_hole": false, 00:08:12.076 "seek_data": false, 00:08:12.076 "copy": false, 00:08:12.076 "nvme_iov_md": false 00:08:12.076 }, 00:08:12.076 "memory_domains": [ 00:08:12.076 { 00:08:12.076 "dma_device_id": "system", 00:08:12.076 "dma_device_type": 1 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.076 "dma_device_type": 2 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "dma_device_id": "system", 00:08:12.076 "dma_device_type": 1 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.076 "dma_device_type": 2 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "dma_device_id": "system", 00:08:12.076 "dma_device_type": 1 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.076 "dma_device_type": 2 00:08:12.076 } 00:08:12.076 ], 00:08:12.076 "driver_specific": { 00:08:12.076 "raid": { 00:08:12.076 "uuid": "e4cbb5d5-4df9-4fb3-a9b1-48e56d7d2173", 00:08:12.076 "strip_size_kb": 64, 00:08:12.076 "state": "online", 00:08:12.076 "raid_level": "concat", 00:08:12.076 "superblock": false, 00:08:12.076 "num_base_bdevs": 3, 00:08:12.076 "num_base_bdevs_discovered": 3, 00:08:12.076 "num_base_bdevs_operational": 3, 00:08:12.076 "base_bdevs_list": [ 00:08:12.076 { 00:08:12.076 "name": "NewBaseBdev", 00:08:12.076 "uuid": "cab180a9-ee5b-41e1-a6be-d5b8dc117e74", 00:08:12.076 "is_configured": true, 00:08:12.076 "data_offset": 0, 00:08:12.076 "data_size": 65536 00:08:12.076 }, 00:08:12.076 { 00:08:12.076 "name": "BaseBdev2", 00:08:12.076 "uuid": "f8cc0aad-f474-47a7-98ba-00773d3ab2ed", 00:08:12.076 "is_configured": true, 00:08:12.076 "data_offset": 0, 00:08:12.076 "data_size": 65536 00:08:12.076 }, 00:08:12.076 { 
00:08:12.076 "name": "BaseBdev3", 00:08:12.076 "uuid": "e76e6ada-b62e-4096-a76b-a6faf4f58588", 00:08:12.076 "is_configured": true, 00:08:12.076 "data_offset": 0, 00:08:12.076 "data_size": 65536 00:08:12.076 } 00:08:12.076 ] 00:08:12.076 } 00:08:12.076 } 00:08:12.076 }' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:12.076 BaseBdev2 00:08:12.076 BaseBdev3' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.076 01:52:17 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:12.076 [2024-12-07 01:52:17.534277] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.076 [2024-12-07 01:52:17.534342] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.076 [2024-12-07 01:52:17.534442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.076 [2024-12-07 01:52:17.534513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.076 [2024-12-07 01:52:17.534562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76477 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 76477 ']' 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 76477 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76477 00:08:12.336 killing process with pid 76477 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76477' 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 76477 00:08:12.336 [2024-12-07 01:52:17.584429] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.336 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 76477 00:08:12.336 [2024-12-07 01:52:17.615065] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:12.595 00:08:12.595 real 0m8.708s 00:08:12.595 user 0m14.928s 00:08:12.595 sys 0m1.708s 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.595 ************************************ 00:08:12.595 END TEST raid_state_function_test 00:08:12.595 ************************************ 00:08:12.595 01:52:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:12.595 01:52:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:12.595 01:52:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.595 01:52:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.595 ************************************ 00:08:12.595 START TEST raid_state_function_test_sb 00:08:12.595 ************************************ 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77082 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:12.595 Process raid pid: 77082 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77082' 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77082 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 77082 ']' 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.595 01:52:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.595 [2024-12-07 01:52:18.023878] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:12.595 [2024-12-07 01:52:18.024094] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.854 [2024-12-07 01:52:18.149962] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.854 [2024-12-07 01:52:18.193477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.854 [2024-12-07 01:52:18.235072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.854 [2024-12-07 01:52:18.235155] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 [2024-12-07 01:52:18.848105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.424 [2024-12-07 01:52:18.848171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.424 [2024-12-07 
01:52:18.848189] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.424 [2024-12-07 01:52:18.848199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.424 [2024-12-07 01:52:18.848205] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.424 [2024-12-07 01:52:18.848217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.682 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.682 "name": "Existed_Raid", 00:08:13.682 "uuid": "39e0b524-3682-479d-80ba-009b4327eeea", 00:08:13.682 "strip_size_kb": 64, 00:08:13.682 "state": "configuring", 00:08:13.682 "raid_level": "concat", 00:08:13.682 "superblock": true, 00:08:13.682 "num_base_bdevs": 3, 00:08:13.682 "num_base_bdevs_discovered": 0, 00:08:13.682 "num_base_bdevs_operational": 3, 00:08:13.682 "base_bdevs_list": [ 00:08:13.682 { 00:08:13.682 "name": "BaseBdev1", 00:08:13.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.682 "is_configured": false, 00:08:13.682 "data_offset": 0, 00:08:13.682 "data_size": 0 00:08:13.682 }, 00:08:13.682 { 00:08:13.682 "name": "BaseBdev2", 00:08:13.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.682 "is_configured": false, 00:08:13.682 "data_offset": 0, 00:08:13.682 "data_size": 0 00:08:13.682 }, 00:08:13.682 { 00:08:13.682 "name": "BaseBdev3", 00:08:13.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.682 "is_configured": false, 00:08:13.682 "data_offset": 0, 00:08:13.682 "data_size": 0 00:08:13.682 } 00:08:13.682 ] 00:08:13.682 }' 00:08:13.682 01:52:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.682 01:52:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.941 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:13.941 01:52:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 [2024-12-07 01:52:19.323186] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:13.942 [2024-12-07 01:52:19.323269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 [2024-12-07 01:52:19.331202] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.942 [2024-12-07 01:52:19.331280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.942 [2024-12-07 01:52:19.331313] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.942 [2024-12-07 01:52:19.331339] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.942 [2024-12-07 01:52:19.331360] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:13.942 [2024-12-07 01:52:19.331424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.942 
01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 [2024-12-07 01:52:19.347770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.942 BaseBdev1 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.942 [ 00:08:13.942 { 
00:08:13.942 "name": "BaseBdev1", 00:08:13.942 "aliases": [ 00:08:13.942 "dbd6bad0-4328-42ad-9af3-d9961626fb90" 00:08:13.942 ], 00:08:13.942 "product_name": "Malloc disk", 00:08:13.942 "block_size": 512, 00:08:13.942 "num_blocks": 65536, 00:08:13.942 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90", 00:08:13.942 "assigned_rate_limits": { 00:08:13.942 "rw_ios_per_sec": 0, 00:08:13.942 "rw_mbytes_per_sec": 0, 00:08:13.942 "r_mbytes_per_sec": 0, 00:08:13.942 "w_mbytes_per_sec": 0 00:08:13.942 }, 00:08:13.942 "claimed": true, 00:08:13.942 "claim_type": "exclusive_write", 00:08:13.942 "zoned": false, 00:08:13.942 "supported_io_types": { 00:08:13.942 "read": true, 00:08:13.942 "write": true, 00:08:13.942 "unmap": true, 00:08:13.942 "flush": true, 00:08:13.942 "reset": true, 00:08:13.942 "nvme_admin": false, 00:08:13.942 "nvme_io": false, 00:08:13.942 "nvme_io_md": false, 00:08:13.942 "write_zeroes": true, 00:08:13.942 "zcopy": true, 00:08:13.942 "get_zone_info": false, 00:08:13.942 "zone_management": false, 00:08:13.942 "zone_append": false, 00:08:13.942 "compare": false, 00:08:13.942 "compare_and_write": false, 00:08:13.942 "abort": true, 00:08:13.942 "seek_hole": false, 00:08:13.942 "seek_data": false, 00:08:13.942 "copy": true, 00:08:13.942 "nvme_iov_md": false 00:08:13.942 }, 00:08:13.942 "memory_domains": [ 00:08:13.942 { 00:08:13.942 "dma_device_id": "system", 00:08:13.942 "dma_device_type": 1 00:08:13.942 }, 00:08:13.942 { 00:08:13.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.942 "dma_device_type": 2 00:08:13.942 } 00:08:13.942 ], 00:08:13.942 "driver_specific": {} 00:08:13.942 } 00:08:13.942 ] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.942 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.201 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.201 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.201 "name": "Existed_Raid", 00:08:14.201 "uuid": "10dbaf94-43a2-47d8-80ce-3a5149a2c657", 00:08:14.201 "strip_size_kb": 64, 00:08:14.201 "state": "configuring", 00:08:14.201 "raid_level": "concat", 00:08:14.201 "superblock": true, 00:08:14.201 
"num_base_bdevs": 3, 00:08:14.201 "num_base_bdevs_discovered": 1, 00:08:14.201 "num_base_bdevs_operational": 3, 00:08:14.201 "base_bdevs_list": [ 00:08:14.201 { 00:08:14.201 "name": "BaseBdev1", 00:08:14.201 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90", 00:08:14.201 "is_configured": true, 00:08:14.201 "data_offset": 2048, 00:08:14.201 "data_size": 63488 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "name": "BaseBdev2", 00:08:14.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.201 "is_configured": false, 00:08:14.201 "data_offset": 0, 00:08:14.201 "data_size": 0 00:08:14.201 }, 00:08:14.201 { 00:08:14.201 "name": "BaseBdev3", 00:08:14.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.201 "is_configured": false, 00:08:14.201 "data_offset": 0, 00:08:14.201 "data_size": 0 00:08:14.201 } 00:08:14.201 ] 00:08:14.201 }' 00:08:14.201 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.201 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 [2024-12-07 01:52:19.819018] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.461 [2024-12-07 01:52:19.819061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.461 
01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 [2024-12-07 01:52:19.831049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:14.461 [2024-12-07 01:52:19.832815] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:14.461 [2024-12-07 01:52:19.832854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:14.461 [2024-12-07 01:52:19.832863] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:14.461 [2024-12-07 01:52:19.832872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.461 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.461 "name": "Existed_Raid", 00:08:14.461 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428", 00:08:14.461 "strip_size_kb": 64, 00:08:14.461 "state": "configuring", 00:08:14.461 "raid_level": "concat", 00:08:14.461 "superblock": true, 00:08:14.461 "num_base_bdevs": 3, 00:08:14.461 "num_base_bdevs_discovered": 1, 00:08:14.462 "num_base_bdevs_operational": 3, 00:08:14.462 "base_bdevs_list": [ 00:08:14.462 { 00:08:14.462 "name": "BaseBdev1", 00:08:14.462 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90", 00:08:14.462 "is_configured": true, 00:08:14.462 "data_offset": 2048, 00:08:14.462 "data_size": 63488 00:08:14.462 }, 00:08:14.462 { 00:08:14.462 "name": "BaseBdev2", 00:08:14.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.462 "is_configured": false, 00:08:14.462 "data_offset": 0, 00:08:14.462 "data_size": 0 00:08:14.462 }, 00:08:14.462 { 00:08:14.462 "name": "BaseBdev3", 00:08:14.462 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:14.462 "is_configured": false, 00:08:14.462 "data_offset": 0, 00:08:14.462 "data_size": 0 00:08:14.462 } 00:08:14.462 ] 00:08:14.462 }' 00:08:14.462 01:52:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.462 01:52:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 [2024-12-07 01:52:20.284126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.031 BaseBdev2 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 [ 00:08:15.031 { 00:08:15.031 "name": "BaseBdev2", 00:08:15.031 "aliases": [ 00:08:15.031 "d2995b33-fbee-4b5b-b0dd-a5095656a950" 00:08:15.031 ], 00:08:15.031 "product_name": "Malloc disk", 00:08:15.031 "block_size": 512, 00:08:15.031 "num_blocks": 65536, 00:08:15.031 "uuid": "d2995b33-fbee-4b5b-b0dd-a5095656a950", 00:08:15.031 "assigned_rate_limits": { 00:08:15.031 "rw_ios_per_sec": 0, 00:08:15.031 "rw_mbytes_per_sec": 0, 00:08:15.031 "r_mbytes_per_sec": 0, 00:08:15.031 "w_mbytes_per_sec": 0 00:08:15.031 }, 00:08:15.031 "claimed": true, 00:08:15.031 "claim_type": "exclusive_write", 00:08:15.031 "zoned": false, 00:08:15.031 "supported_io_types": { 00:08:15.031 "read": true, 00:08:15.031 "write": true, 00:08:15.031 "unmap": true, 00:08:15.031 "flush": true, 00:08:15.031 "reset": true, 00:08:15.031 "nvme_admin": false, 00:08:15.031 "nvme_io": false, 00:08:15.031 "nvme_io_md": false, 00:08:15.031 "write_zeroes": true, 00:08:15.031 "zcopy": true, 00:08:15.031 "get_zone_info": false, 00:08:15.031 "zone_management": false, 00:08:15.031 "zone_append": false, 00:08:15.031 "compare": false, 00:08:15.031 "compare_and_write": false, 00:08:15.031 "abort": true, 00:08:15.031 "seek_hole": false, 00:08:15.031 "seek_data": false, 00:08:15.031 "copy": true, 00:08:15.031 "nvme_iov_md": false 00:08:15.031 }, 00:08:15.031 "memory_domains": [ 00:08:15.031 { 00:08:15.031 "dma_device_id": "system", 00:08:15.031 "dma_device_type": 1 00:08:15.031 }, 00:08:15.031 { 00:08:15.031 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.031 "dma_device_type": 2 00:08:15.031 } 00:08:15.031 ], 00:08:15.031 "driver_specific": {} 00:08:15.031 } 00:08:15.031 ] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.031 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.031 "name": "Existed_Raid", 00:08:15.031 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428", 00:08:15.031 "strip_size_kb": 64, 00:08:15.032 "state": "configuring", 00:08:15.032 "raid_level": "concat", 00:08:15.032 "superblock": true, 00:08:15.032 "num_base_bdevs": 3, 00:08:15.032 "num_base_bdevs_discovered": 2, 00:08:15.032 "num_base_bdevs_operational": 3, 00:08:15.032 "base_bdevs_list": [ 00:08:15.032 { 00:08:15.032 "name": "BaseBdev1", 00:08:15.032 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90", 00:08:15.032 "is_configured": true, 00:08:15.032 "data_offset": 2048, 00:08:15.032 "data_size": 63488 00:08:15.032 }, 00:08:15.032 { 00:08:15.032 "name": "BaseBdev2", 00:08:15.032 "uuid": "d2995b33-fbee-4b5b-b0dd-a5095656a950", 00:08:15.032 "is_configured": true, 00:08:15.032 "data_offset": 2048, 00:08:15.032 "data_size": 63488 00:08:15.032 }, 00:08:15.032 { 00:08:15.032 "name": "BaseBdev3", 00:08:15.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.032 "is_configured": false, 00:08:15.032 "data_offset": 0, 00:08:15.032 "data_size": 0 00:08:15.032 } 00:08:15.032 ] 00:08:15.032 }' 00:08:15.032 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.032 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:15.601 01:52:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.601 [2024-12-07 01:52:20.790064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:15.601 [2024-12-07 01:52:20.790258] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:15.601 [2024-12-07 01:52:20.790275] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:15.601 [2024-12-07 01:52:20.790544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:15.601 BaseBdev3 00:08:15.601 [2024-12-07 01:52:20.790689] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:15.601 [2024-12-07 01:52:20.790700] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:15.601 [2024-12-07 01:52:20.790828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:15.601 01:52:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.602 [ 00:08:15.602 { 00:08:15.602 "name": "BaseBdev3", 00:08:15.602 "aliases": [ 00:08:15.602 "cca697c3-9e55-455a-8854-2c1dff6a8ea0" 00:08:15.602 ], 00:08:15.602 "product_name": "Malloc disk", 00:08:15.602 "block_size": 512, 00:08:15.602 "num_blocks": 65536, 00:08:15.602 "uuid": "cca697c3-9e55-455a-8854-2c1dff6a8ea0", 00:08:15.602 "assigned_rate_limits": { 00:08:15.602 "rw_ios_per_sec": 0, 00:08:15.602 "rw_mbytes_per_sec": 0, 00:08:15.602 "r_mbytes_per_sec": 0, 00:08:15.602 "w_mbytes_per_sec": 0 00:08:15.602 }, 00:08:15.602 "claimed": true, 00:08:15.602 "claim_type": "exclusive_write", 00:08:15.602 "zoned": false, 00:08:15.602 "supported_io_types": { 00:08:15.602 "read": true, 00:08:15.602 "write": true, 00:08:15.602 "unmap": true, 00:08:15.602 "flush": true, 00:08:15.602 "reset": true, 00:08:15.602 "nvme_admin": false, 00:08:15.602 "nvme_io": false, 00:08:15.602 "nvme_io_md": false, 00:08:15.602 "write_zeroes": true, 00:08:15.602 "zcopy": true, 00:08:15.602 "get_zone_info": false, 00:08:15.602 "zone_management": false, 00:08:15.602 "zone_append": false, 00:08:15.602 "compare": false, 00:08:15.602 "compare_and_write": false, 00:08:15.602 "abort": true, 00:08:15.602 "seek_hole": false, 00:08:15.602 "seek_data": false, 
00:08:15.602 "copy": true, 00:08:15.602 "nvme_iov_md": false 00:08:15.602 }, 00:08:15.602 "memory_domains": [ 00:08:15.602 { 00:08:15.602 "dma_device_id": "system", 00:08:15.602 "dma_device_type": 1 00:08:15.602 }, 00:08:15.602 { 00:08:15.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.602 "dma_device_type": 2 00:08:15.602 } 00:08:15.602 ], 00:08:15.602 "driver_specific": {} 00:08:15.602 } 00:08:15.602 ] 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.602 "name": "Existed_Raid", 00:08:15.602 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428", 00:08:15.602 "strip_size_kb": 64, 00:08:15.602 "state": "online", 00:08:15.602 "raid_level": "concat", 00:08:15.602 "superblock": true, 00:08:15.602 "num_base_bdevs": 3, 00:08:15.602 "num_base_bdevs_discovered": 3, 00:08:15.602 "num_base_bdevs_operational": 3, 00:08:15.602 "base_bdevs_list": [ 00:08:15.602 { 00:08:15.602 "name": "BaseBdev1", 00:08:15.602 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90", 00:08:15.602 "is_configured": true, 00:08:15.602 "data_offset": 2048, 00:08:15.602 "data_size": 63488 00:08:15.602 }, 00:08:15.602 { 00:08:15.602 "name": "BaseBdev2", 00:08:15.602 "uuid": "d2995b33-fbee-4b5b-b0dd-a5095656a950", 00:08:15.602 "is_configured": true, 00:08:15.602 "data_offset": 2048, 00:08:15.602 "data_size": 63488 00:08:15.602 }, 00:08:15.602 { 00:08:15.602 "name": "BaseBdev3", 00:08:15.602 "uuid": "cca697c3-9e55-455a-8854-2c1dff6a8ea0", 00:08:15.602 "is_configured": true, 00:08:15.602 "data_offset": 2048, 00:08:15.602 "data_size": 63488 00:08:15.602 } 00:08:15.602 ] 00:08:15.602 }' 00:08:15.602 01:52:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.602 01:52:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:15.862 [2024-12-07 01:52:21.277535] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:15.862 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:15.862 "name": "Existed_Raid",
00:08:15.862 "aliases": [
00:08:15.862 "06ed9529-877f-4afc-93ed-914cd40c4428"
00:08:15.862 ],
00:08:15.862 "product_name": "Raid Volume",
00:08:15.862 "block_size": 512,
00:08:15.862 "num_blocks": 190464,
00:08:15.862 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428",
00:08:15.862 "assigned_rate_limits": {
00:08:15.862 "rw_ios_per_sec": 0,
00:08:15.862 "rw_mbytes_per_sec": 0,
00:08:15.862 "r_mbytes_per_sec": 0,
00:08:15.862 "w_mbytes_per_sec": 0
00:08:15.862 },
00:08:15.862 "claimed": false,
00:08:15.862 "zoned": false,
00:08:15.862 "supported_io_types": {
00:08:15.862 "read": true,
00:08:15.862 "write": true,
00:08:15.862 "unmap": true,
00:08:15.862 "flush": true,
00:08:15.862 "reset": true,
00:08:15.862 "nvme_admin": false,
00:08:15.862 "nvme_io": false,
00:08:15.862 "nvme_io_md": false,
00:08:15.862 "write_zeroes": true,
00:08:15.862 "zcopy": false,
00:08:15.862 "get_zone_info": false,
00:08:15.862 "zone_management": false,
00:08:15.862 "zone_append": false,
00:08:15.862 "compare": false,
00:08:15.862 "compare_and_write": false,
00:08:15.862 "abort": false,
00:08:15.862 "seek_hole": false,
00:08:15.862 "seek_data": false,
00:08:15.862 "copy": false,
00:08:15.862 "nvme_iov_md": false
00:08:15.862 },
00:08:15.862 "memory_domains": [
00:08:15.862 {
00:08:15.862 "dma_device_id": "system",
00:08:15.862 "dma_device_type": 1
00:08:15.862 },
00:08:15.862 {
00:08:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.862 "dma_device_type": 2
00:08:15.862 },
00:08:15.862 {
00:08:15.862 "dma_device_id": "system",
00:08:15.862 "dma_device_type": 1
00:08:15.862 },
00:08:15.862 {
00:08:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.862 "dma_device_type": 2
00:08:15.862 },
00:08:15.862 {
00:08:15.862 "dma_device_id": "system",
00:08:15.862 "dma_device_type": 1
00:08:15.862 },
00:08:15.862 {
00:08:15.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:15.862 "dma_device_type": 2
00:08:15.862 }
00:08:15.862 ],
00:08:15.862 "driver_specific": {
00:08:15.862 "raid": {
00:08:15.862 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428",
00:08:15.862 "strip_size_kb": 64,
00:08:15.862 "state": "online",
00:08:15.862 "raid_level": "concat",
00:08:15.862 "superblock": true,
00:08:15.863 "num_base_bdevs": 3,
00:08:15.863 "num_base_bdevs_discovered": 3,
00:08:15.863 "num_base_bdevs_operational": 3,
00:08:15.863 "base_bdevs_list": [
00:08:15.863 {
00:08:15.863 "name": "BaseBdev1",
00:08:15.863 "uuid": "dbd6bad0-4328-42ad-9af3-d9961626fb90",
00:08:15.863 "is_configured": true,
00:08:15.863 "data_offset": 2048,
00:08:15.863 "data_size": 63488
00:08:15.863 },
00:08:15.863 {
00:08:15.863 "name": "BaseBdev2",
00:08:15.863 "uuid": "d2995b33-fbee-4b5b-b0dd-a5095656a950",
00:08:15.863 "is_configured": true,
00:08:15.863 "data_offset": 2048,
00:08:15.863 "data_size": 63488
00:08:15.863 },
00:08:15.863 {
00:08:15.863 "name": "BaseBdev3",
00:08:15.863 "uuid": "cca697c3-9e55-455a-8854-2c1dff6a8ea0",
00:08:15.863 "is_configured": true,
00:08:15.863 "data_offset": 2048,
00:08:15.863 "data_size": 63488
00:08:15.863 }
00:08:15.863 ]
00:08:15.863 }
00:08:15.863 }
00:08:15.863 }'
00:08:15.863 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:16.123 BaseBdev2
00:08:16.123 BaseBdev3'
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.123 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.124 [2024-12-07 01:52:21.564839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
[2024-12-07 01:52:21.564905] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-12-07 01:52:21.564972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:16.124 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:16.383 "name": "Existed_Raid",
00:08:16.383 "uuid": "06ed9529-877f-4afc-93ed-914cd40c4428",
00:08:16.383 "strip_size_kb": 64,
00:08:16.383 "state": "offline",
00:08:16.383 "raid_level": "concat",
00:08:16.383 "superblock": true,
00:08:16.383 "num_base_bdevs": 3,
00:08:16.383 "num_base_bdevs_discovered": 2,
00:08:16.383 "num_base_bdevs_operational": 2,
00:08:16.383 "base_bdevs_list": [
00:08:16.383 {
00:08:16.383 "name": null,
00:08:16.383 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:16.383 "is_configured": false,
00:08:16.383 "data_offset": 0,
00:08:16.383 "data_size": 63488
00:08:16.383 },
00:08:16.383 {
00:08:16.383 "name": "BaseBdev2",
00:08:16.383 "uuid": "d2995b33-fbee-4b5b-b0dd-a5095656a950",
00:08:16.383 "is_configured": true,
00:08:16.383 "data_offset": 2048,
00:08:16.383 "data_size": 63488
00:08:16.383 },
00:08:16.383 {
00:08:16.383 "name": "BaseBdev3",
00:08:16.383 "uuid": "cca697c3-9e55-455a-8854-2c1dff6a8ea0",
00:08:16.383 "is_configured": true,
00:08:16.383 "data_offset": 2048,
00:08:16.383 "data_size": 63488
00:08:16.383 }
00:08:16.383 ]
00:08:16.383 }'
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:16.383 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.642 01:52:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.642 [2024-12-07 01:52:22.023369] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.642 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.642 [2024-12-07 01:52:22.089932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
[2024-12-07 01:52:22.089977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.902 BaseBdev2
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.902 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.902 [
00:08:16.902 {
00:08:16.902 "name": "BaseBdev2",
00:08:16.902 "aliases": [
00:08:16.902 "b7fcd601-8681-45ed-8bc1-e90cc4592546"
00:08:16.902 ],
00:08:16.902 "product_name": "Malloc disk",
00:08:16.902 "block_size": 512,
00:08:16.902 "num_blocks": 65536,
00:08:16.902 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546",
00:08:16.902 "assigned_rate_limits": {
00:08:16.902 "rw_ios_per_sec": 0,
00:08:16.902 "rw_mbytes_per_sec": 0,
00:08:16.902 "r_mbytes_per_sec": 0,
00:08:16.902 "w_mbytes_per_sec": 0
00:08:16.902 },
00:08:16.902 "claimed": false,
00:08:16.902 "zoned": false,
00:08:16.902 "supported_io_types": {
00:08:16.902 "read": true,
00:08:16.902 "write": true,
00:08:16.902 "unmap": true,
00:08:16.902 "flush": true,
00:08:16.902 "reset": true,
00:08:16.902 "nvme_admin": false,
00:08:16.902 "nvme_io": false,
00:08:16.902 "nvme_io_md": false,
00:08:16.902 "write_zeroes": true,
00:08:16.902 "zcopy": true,
00:08:16.902 "get_zone_info": false,
00:08:16.902 "zone_management": false,
00:08:16.902 "zone_append": false,
00:08:16.902 "compare": false,
00:08:16.902 "compare_and_write": false,
00:08:16.902 "abort": true,
00:08:16.902 "seek_hole": false,
00:08:16.902 "seek_data": false,
00:08:16.902 "copy": true,
00:08:16.902 "nvme_iov_md": false
00:08:16.902 },
00:08:16.902 "memory_domains": [
00:08:16.902 {
00:08:16.902 "dma_device_id": "system",
00:08:16.902 "dma_device_type": 1
00:08:16.902 },
00:08:16.902 {
00:08:16.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.902 "dma_device_type": 2
00:08:16.903 }
00:08:16.903 ],
00:08:16.903 "driver_specific": {}
00:08:16.903 }
00:08:16.903 ]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.903 BaseBdev3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.903 [
00:08:16.903 {
00:08:16.903 "name": "BaseBdev3",
00:08:16.903 "aliases": [
00:08:16.903 "29983963-5c45-4f73-acda-f0c298ab0736"
00:08:16.903 ],
00:08:16.903 "product_name": "Malloc disk",
00:08:16.903 "block_size": 512,
00:08:16.903 "num_blocks": 65536,
00:08:16.903 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736",
00:08:16.903 "assigned_rate_limits": {
00:08:16.903 "rw_ios_per_sec": 0,
00:08:16.903 "rw_mbytes_per_sec": 0,
00:08:16.903 "r_mbytes_per_sec": 0,
00:08:16.903 "w_mbytes_per_sec": 0
00:08:16.903 },
00:08:16.903 "claimed": false,
00:08:16.903 "zoned": false,
00:08:16.903 "supported_io_types": {
00:08:16.903 "read": true,
00:08:16.903 "write": true,
00:08:16.903 "unmap": true,
00:08:16.903 "flush": true,
00:08:16.903 "reset": true,
00:08:16.903 "nvme_admin": false,
00:08:16.903 "nvme_io": false,
00:08:16.903 "nvme_io_md": false,
00:08:16.903 "write_zeroes": true,
00:08:16.903 "zcopy": true,
00:08:16.903 "get_zone_info": false,
00:08:16.903 "zone_management": false,
00:08:16.903 "zone_append": false,
00:08:16.903 "compare": false,
00:08:16.903 "compare_and_write": false,
00:08:16.903 "abort": true,
00:08:16.903 "seek_hole": false,
00:08:16.903 "seek_data": false,
00:08:16.903 "copy": true,
00:08:16.903 "nvme_iov_md": false
00:08:16.903 },
00:08:16.903 "memory_domains": [
00:08:16.903 {
00:08:16.903 "dma_device_id": "system",
00:08:16.903 "dma_device_type": 1
00:08:16.903 },
00:08:16.903 {
00:08:16.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:16.903 "dma_device_type": 2
00:08:16.903 }
00:08:16.903 ],
00:08:16.903 "driver_specific": {}
00:08:16.903 }
00:08:16.903 ]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.903 [2024-12-07 01:52:22.248439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-07 01:52:22.248535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-07 01:52:22.248574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-07 01:52:22.250288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:16.903 "name": "Existed_Raid",
00:08:16.903 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f",
00:08:16.903 "strip_size_kb": 64,
00:08:16.903 "state": "configuring",
00:08:16.903 "raid_level": "concat",
00:08:16.903 "superblock": true,
00:08:16.903 "num_base_bdevs": 3,
00:08:16.903 "num_base_bdevs_discovered": 2,
00:08:16.903 "num_base_bdevs_operational": 3,
00:08:16.903 "base_bdevs_list": [
00:08:16.903 {
00:08:16.903 "name": "BaseBdev1",
00:08:16.903 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:16.903 "is_configured": false,
00:08:16.903 "data_offset": 0,
00:08:16.903 "data_size": 0
00:08:16.903 },
00:08:16.903 {
00:08:16.903 "name": "BaseBdev2",
00:08:16.903 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546",
00:08:16.903 "is_configured": true,
00:08:16.903 "data_offset": 2048,
00:08:16.903 "data_size": 63488
00:08:16.903 },
00:08:16.903 {
00:08:16.903 "name": "BaseBdev3",
00:08:16.903 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736",
00:08:16.903 "is_configured": true,
00:08:16.903 "data_offset": 2048,
00:08:16.903 "data_size": 63488
00:08:16.903 }
00:08:16.903 ]
00:08:16.903 }'
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:16.903 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.470 [2024-12-07 01:52:22.715610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:17.470 "name": "Existed_Raid",
00:08:17.470 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f",
00:08:17.470 "strip_size_kb": 64,
00:08:17.470 "state": "configuring",
00:08:17.470 "raid_level": "concat",
00:08:17.470 "superblock": true,
00:08:17.470 "num_base_bdevs": 3,
00:08:17.470 "num_base_bdevs_discovered": 1,
00:08:17.470 "num_base_bdevs_operational": 3,
00:08:17.470 "base_bdevs_list": [
00:08:17.470 {
00:08:17.470 "name": "BaseBdev1",
00:08:17.470 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:17.470 "is_configured": false,
00:08:17.470 "data_offset": 0,
00:08:17.470 "data_size": 0
00:08:17.470 },
00:08:17.470 {
00:08:17.470 "name": null,
00:08:17.470 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546",
00:08:17.470 "is_configured": false,
00:08:17.470 "data_offset": 0,
00:08:17.470 "data_size": 63488
00:08:17.470 },
00:08:17.470 {
00:08:17.470 "name": "BaseBdev3",
00:08:17.470 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736",
00:08:17.470 "is_configured": true,
00:08:17.470 "data_offset": 2048,
00:08:17.470 "data_size": 63488
00:08:17.470 }
00:08:17.470 ]
00:08:17.470 }'
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:17.470 01:52:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.729 [2024-12-07 01:52:23.177541] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed BaseBdev1
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:08:17.729 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i
00:08:17.730 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:08:17.730 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:08:17.730 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:08:17.730 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.730 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.989 [
00:08:17.989 {
00:08:17.989 "name": "BaseBdev1",
00:08:17.989 "aliases": [
00:08:17.989 "f63a3307-eb04-498f-a64f-4d600550dc5b"
00:08:17.989 ],
00:08:17.989 "product_name": "Malloc disk",
00:08:17.989 "block_size": 512,
00:08:17.989 "num_blocks": 65536,
00:08:17.989 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b",
00:08:17.989 "assigned_rate_limits": {
00:08:17.989 "rw_ios_per_sec": 0,
00:08:17.989 "rw_mbytes_per_sec": 0,
00:08:17.989 "r_mbytes_per_sec": 0,
00:08:17.989 "w_mbytes_per_sec": 0
00:08:17.989 },
00:08:17.989 "claimed": true,
00:08:17.989 "claim_type": "exclusive_write",
00:08:17.989 "zoned": false,
00:08:17.989 "supported_io_types": {
00:08:17.989 "read": true,
00:08:17.989 "write": true,
00:08:17.989 "unmap": true,
00:08:17.989 "flush": true,
00:08:17.989 "reset": true,
00:08:17.989 "nvme_admin": false,
00:08:17.989 "nvme_io": false,
00:08:17.989 "nvme_io_md": false,
00:08:17.989 "write_zeroes": true,
00:08:17.989 "zcopy": true,
00:08:17.989 "get_zone_info": false,
00:08:17.989 "zone_management": false,
00:08:17.989 "zone_append": false,
00:08:17.989 "compare": false,
00:08:17.989 "compare_and_write": false,
00:08:17.989 "abort": true,
00:08:17.989 "seek_hole": false,
00:08:17.989 "seek_data": false,
00:08:17.989 "copy": true,
00:08:17.989 "nvme_iov_md": false
00:08:17.989 },
00:08:17.989 "memory_domains": [
00:08:17.989 {
00:08:17.989 "dma_device_id": "system",
00:08:17.989 "dma_device_type": 1
00:08:17.989 },
00:08:17.989 {
00:08:17.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:17.989 "dma_device_type": 2
00:08:17.989 }
00:08:17.989 ],
00:08:17.989 "driver_specific": {}
00:08:17.989 }
00:08:17.989 ]
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.989 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.989 "name": "Existed_Raid", 00:08:17.989 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:17.989 "strip_size_kb": 64, 00:08:17.989 "state": "configuring", 00:08:17.989 "raid_level": "concat", 00:08:17.989 "superblock": true, 00:08:17.989 "num_base_bdevs": 3, 00:08:17.989 "num_base_bdevs_discovered": 2, 00:08:17.989 "num_base_bdevs_operational": 3, 00:08:17.989 "base_bdevs_list": [ 00:08:17.989 { 00:08:17.989 "name": "BaseBdev1", 00:08:17.989 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:17.989 "is_configured": true, 00:08:17.990 "data_offset": 2048, 00:08:17.990 "data_size": 63488 00:08:17.990 }, 00:08:17.990 { 00:08:17.990 "name": null, 00:08:17.990 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:17.990 "is_configured": false, 00:08:17.990 "data_offset": 0, 00:08:17.990 "data_size": 63488 00:08:17.990 }, 00:08:17.990 { 00:08:17.990 "name": "BaseBdev3", 00:08:17.990 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:17.990 "is_configured": true, 00:08:17.990 "data_offset": 2048, 00:08:17.990 "data_size": 63488 00:08:17.990 } 00:08:17.990 ] 00:08:17.990 }' 00:08:17.990 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.990 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.249 [2024-12-07 01:52:23.696685] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.249 
01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.249 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.512 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.512 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.512 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.512 "name": "Existed_Raid", 00:08:18.512 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:18.512 "strip_size_kb": 64, 00:08:18.512 "state": "configuring", 00:08:18.512 "raid_level": "concat", 00:08:18.512 "superblock": true, 00:08:18.512 "num_base_bdevs": 3, 00:08:18.512 "num_base_bdevs_discovered": 1, 00:08:18.512 "num_base_bdevs_operational": 3, 00:08:18.512 "base_bdevs_list": [ 00:08:18.512 { 00:08:18.512 "name": "BaseBdev1", 00:08:18.512 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:18.512 "is_configured": true, 00:08:18.512 "data_offset": 2048, 00:08:18.512 "data_size": 63488 00:08:18.512 }, 00:08:18.512 { 00:08:18.512 "name": null, 00:08:18.512 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:18.512 "is_configured": false, 00:08:18.512 "data_offset": 0, 00:08:18.512 "data_size": 63488 00:08:18.512 }, 00:08:18.512 { 00:08:18.512 "name": null, 00:08:18.512 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:18.512 "is_configured": false, 00:08:18.512 "data_offset": 0, 00:08:18.512 "data_size": 63488 00:08:18.512 } 00:08:18.512 ] 00:08:18.512 }' 00:08:18.512 01:52:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.512 01:52:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.779 
01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.779 [2024-12-07 01:52:24.163889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.779 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.779 "name": "Existed_Raid", 00:08:18.779 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:18.779 "strip_size_kb": 64, 00:08:18.779 "state": "configuring", 00:08:18.779 "raid_level": "concat", 00:08:18.779 "superblock": true, 00:08:18.779 "num_base_bdevs": 3, 00:08:18.779 "num_base_bdevs_discovered": 2, 00:08:18.779 "num_base_bdevs_operational": 3, 00:08:18.779 "base_bdevs_list": [ 00:08:18.779 { 00:08:18.779 "name": "BaseBdev1", 00:08:18.779 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:18.779 "is_configured": true, 00:08:18.779 "data_offset": 2048, 00:08:18.779 "data_size": 63488 00:08:18.779 }, 00:08:18.779 { 00:08:18.779 "name": null, 00:08:18.779 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:18.779 "is_configured": false, 00:08:18.779 "data_offset": 0, 00:08:18.779 "data_size": 
63488 00:08:18.779 }, 00:08:18.779 { 00:08:18.779 "name": "BaseBdev3", 00:08:18.779 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:18.780 "is_configured": true, 00:08:18.780 "data_offset": 2048, 00:08:18.780 "data_size": 63488 00:08:18.780 } 00:08:18.780 ] 00:08:18.780 }' 00:08:18.780 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.780 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.355 [2024-12-07 01:52:24.655121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.355 "name": "Existed_Raid", 00:08:19.355 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:19.355 "strip_size_kb": 64, 00:08:19.355 "state": "configuring", 00:08:19.355 "raid_level": "concat", 00:08:19.355 "superblock": true, 00:08:19.355 "num_base_bdevs": 3, 00:08:19.355 "num_base_bdevs_discovered": 1, 00:08:19.355 "num_base_bdevs_operational": 
3, 00:08:19.355 "base_bdevs_list": [ 00:08:19.355 { 00:08:19.355 "name": null, 00:08:19.355 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:19.355 "is_configured": false, 00:08:19.355 "data_offset": 0, 00:08:19.355 "data_size": 63488 00:08:19.355 }, 00:08:19.355 { 00:08:19.355 "name": null, 00:08:19.355 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:19.355 "is_configured": false, 00:08:19.355 "data_offset": 0, 00:08:19.355 "data_size": 63488 00:08:19.355 }, 00:08:19.355 { 00:08:19.355 "name": "BaseBdev3", 00:08:19.355 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:19.355 "is_configured": true, 00:08:19.355 "data_offset": 2048, 00:08:19.355 "data_size": 63488 00:08:19.355 } 00:08:19.355 ] 00:08:19.355 }' 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.355 01:52:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:19.923 [2024-12-07 01:52:25.136762] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.923 "name": "Existed_Raid", 00:08:19.923 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:19.923 "strip_size_kb": 64, 00:08:19.923 "state": "configuring", 00:08:19.923 "raid_level": "concat", 00:08:19.923 "superblock": true, 00:08:19.923 "num_base_bdevs": 3, 00:08:19.923 "num_base_bdevs_discovered": 2, 00:08:19.923 "num_base_bdevs_operational": 3, 00:08:19.923 "base_bdevs_list": [ 00:08:19.923 { 00:08:19.923 "name": null, 00:08:19.923 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:19.923 "is_configured": false, 00:08:19.923 "data_offset": 0, 00:08:19.923 "data_size": 63488 00:08:19.923 }, 00:08:19.923 { 00:08:19.923 "name": "BaseBdev2", 00:08:19.923 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:19.923 "is_configured": true, 00:08:19.923 "data_offset": 2048, 00:08:19.923 "data_size": 63488 00:08:19.923 }, 00:08:19.923 { 00:08:19.923 "name": "BaseBdev3", 00:08:19.923 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:19.923 "is_configured": true, 00:08:19.923 "data_offset": 2048, 00:08:19.923 "data_size": 63488 00:08:19.923 } 00:08:19.923 ] 00:08:19.923 }' 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.923 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f63a3307-eb04-498f-a64f-4d600550dc5b 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.183 [2024-12-07 01:52:25.634683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:20.183 [2024-12-07 01:52:25.634923] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:20.183 [2024-12-07 01:52:25.634944] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.183 [2024-12-07 01:52:25.635217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:20.183 [2024-12-07 01:52:25.635336] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:20.183 [2024-12-07 01:52:25.635346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:20.183 [2024-12-07 01:52:25.635455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:20.183 NewBaseBdev 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.183 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 [ 00:08:20.442 { 00:08:20.442 "name": "NewBaseBdev", 00:08:20.442 "aliases": [ 00:08:20.442 "f63a3307-eb04-498f-a64f-4d600550dc5b" 00:08:20.442 ], 00:08:20.442 "product_name": "Malloc disk", 00:08:20.442 "block_size": 512, 00:08:20.442 "num_blocks": 65536, 00:08:20.442 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 
00:08:20.442 "assigned_rate_limits": { 00:08:20.442 "rw_ios_per_sec": 0, 00:08:20.442 "rw_mbytes_per_sec": 0, 00:08:20.442 "r_mbytes_per_sec": 0, 00:08:20.442 "w_mbytes_per_sec": 0 00:08:20.442 }, 00:08:20.442 "claimed": true, 00:08:20.442 "claim_type": "exclusive_write", 00:08:20.442 "zoned": false, 00:08:20.442 "supported_io_types": { 00:08:20.442 "read": true, 00:08:20.442 "write": true, 00:08:20.442 "unmap": true, 00:08:20.442 "flush": true, 00:08:20.442 "reset": true, 00:08:20.442 "nvme_admin": false, 00:08:20.442 "nvme_io": false, 00:08:20.442 "nvme_io_md": false, 00:08:20.442 "write_zeroes": true, 00:08:20.442 "zcopy": true, 00:08:20.442 "get_zone_info": false, 00:08:20.442 "zone_management": false, 00:08:20.442 "zone_append": false, 00:08:20.442 "compare": false, 00:08:20.442 "compare_and_write": false, 00:08:20.442 "abort": true, 00:08:20.442 "seek_hole": false, 00:08:20.442 "seek_data": false, 00:08:20.442 "copy": true, 00:08:20.442 "nvme_iov_md": false 00:08:20.442 }, 00:08:20.442 "memory_domains": [ 00:08:20.442 { 00:08:20.442 "dma_device_id": "system", 00:08:20.442 "dma_device_type": 1 00:08:20.442 }, 00:08:20.442 { 00:08:20.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.442 "dma_device_type": 2 00:08:20.442 } 00:08:20.442 ], 00:08:20.442 "driver_specific": {} 00:08:20.442 } 00:08:20.442 ] 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.442 "name": "Existed_Raid", 00:08:20.442 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:20.442 "strip_size_kb": 64, 00:08:20.442 "state": "online", 00:08:20.442 "raid_level": "concat", 00:08:20.442 "superblock": true, 00:08:20.442 "num_base_bdevs": 3, 00:08:20.442 "num_base_bdevs_discovered": 3, 00:08:20.442 "num_base_bdevs_operational": 3, 00:08:20.442 "base_bdevs_list": [ 00:08:20.442 { 00:08:20.442 "name": "NewBaseBdev", 00:08:20.442 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:20.442 "is_configured": true, 00:08:20.442 "data_offset": 2048, 
00:08:20.442 "data_size": 63488 00:08:20.442 }, 00:08:20.442 { 00:08:20.442 "name": "BaseBdev2", 00:08:20.442 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:20.442 "is_configured": true, 00:08:20.442 "data_offset": 2048, 00:08:20.442 "data_size": 63488 00:08:20.442 }, 00:08:20.442 { 00:08:20.442 "name": "BaseBdev3", 00:08:20.442 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:20.442 "is_configured": true, 00:08:20.442 "data_offset": 2048, 00:08:20.442 "data_size": 63488 00:08:20.442 } 00:08:20.442 ] 00:08:20.442 }' 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.442 01:52:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:20.701 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.702 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.702 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:20.702 [2024-12-07 01:52:26.110166] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:20.702 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.702 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:20.702 "name": "Existed_Raid", 00:08:20.702 "aliases": [ 00:08:20.702 "300fe34c-1af2-4645-9cab-7c37640bf07f" 00:08:20.702 ], 00:08:20.702 "product_name": "Raid Volume", 00:08:20.702 "block_size": 512, 00:08:20.702 "num_blocks": 190464, 00:08:20.702 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:20.702 "assigned_rate_limits": { 00:08:20.702 "rw_ios_per_sec": 0, 00:08:20.702 "rw_mbytes_per_sec": 0, 00:08:20.702 "r_mbytes_per_sec": 0, 00:08:20.702 "w_mbytes_per_sec": 0 00:08:20.702 }, 00:08:20.702 "claimed": false, 00:08:20.702 "zoned": false, 00:08:20.702 "supported_io_types": { 00:08:20.702 "read": true, 00:08:20.702 "write": true, 00:08:20.702 "unmap": true, 00:08:20.702 "flush": true, 00:08:20.702 "reset": true, 00:08:20.702 "nvme_admin": false, 00:08:20.702 "nvme_io": false, 00:08:20.702 "nvme_io_md": false, 00:08:20.702 "write_zeroes": true, 00:08:20.702 "zcopy": false, 00:08:20.702 "get_zone_info": false, 00:08:20.702 "zone_management": false, 00:08:20.702 "zone_append": false, 00:08:20.702 "compare": false, 00:08:20.702 "compare_and_write": false, 00:08:20.702 "abort": false, 00:08:20.702 "seek_hole": false, 00:08:20.702 "seek_data": false, 00:08:20.702 "copy": false, 00:08:20.702 "nvme_iov_md": false 00:08:20.702 }, 00:08:20.702 "memory_domains": [ 00:08:20.702 { 00:08:20.702 "dma_device_id": "system", 00:08:20.702 "dma_device_type": 1 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.702 "dma_device_type": 2 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "dma_device_id": "system", 00:08:20.702 "dma_device_type": 1 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.702 "dma_device_type": 2 00:08:20.702 }, 00:08:20.702 { 
00:08:20.702 "dma_device_id": "system", 00:08:20.702 "dma_device_type": 1 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.702 "dma_device_type": 2 00:08:20.702 } 00:08:20.702 ], 00:08:20.702 "driver_specific": { 00:08:20.702 "raid": { 00:08:20.702 "uuid": "300fe34c-1af2-4645-9cab-7c37640bf07f", 00:08:20.702 "strip_size_kb": 64, 00:08:20.702 "state": "online", 00:08:20.702 "raid_level": "concat", 00:08:20.702 "superblock": true, 00:08:20.702 "num_base_bdevs": 3, 00:08:20.702 "num_base_bdevs_discovered": 3, 00:08:20.702 "num_base_bdevs_operational": 3, 00:08:20.702 "base_bdevs_list": [ 00:08:20.702 { 00:08:20.702 "name": "NewBaseBdev", 00:08:20.702 "uuid": "f63a3307-eb04-498f-a64f-4d600550dc5b", 00:08:20.702 "is_configured": true, 00:08:20.702 "data_offset": 2048, 00:08:20.702 "data_size": 63488 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "name": "BaseBdev2", 00:08:20.702 "uuid": "b7fcd601-8681-45ed-8bc1-e90cc4592546", 00:08:20.702 "is_configured": true, 00:08:20.702 "data_offset": 2048, 00:08:20.702 "data_size": 63488 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "name": "BaseBdev3", 00:08:20.702 "uuid": "29983963-5c45-4f73-acda-f0c298ab0736", 00:08:20.702 "is_configured": true, 00:08:20.702 "data_offset": 2048, 00:08:20.702 "data_size": 63488 00:08:20.702 } 00:08:20.702 ] 00:08:20.702 } 00:08:20.702 } 00:08:20.702 }' 00:08:20.702 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:20.961 BaseBdev2 00:08:20.961 BaseBdev3' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.961 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 [2024-12-07 01:52:26.365418] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:20.962 [2024-12-07 01:52:26.365443] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.962 [2024-12-07 01:52:26.365512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.962 [2024-12-07 01:52:26.365565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.962 [2024-12-07 01:52:26.365578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77082 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 77082 ']' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 77082 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77082 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77082' 00:08:20.962 killing process with pid 77082 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 77082 00:08:20.962 [2024-12-07 01:52:26.417478] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.962 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 77082 00:08:21.221 [2024-12-07 01:52:26.447235] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.481 01:52:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:21.481 00:08:21.481 real 0m8.753s 00:08:21.481 user 0m14.957s 00:08:21.481 sys 0m1.726s 00:08:21.481 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.481 ************************************ 00:08:21.481 END TEST raid_state_function_test_sb 
00:08:21.481 ************************************ 00:08:21.481 01:52:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.481 01:52:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:21.481 01:52:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:21.481 01:52:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.481 01:52:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.481 ************************************ 00:08:21.481 START TEST raid_superblock_test 00:08:21.481 ************************************ 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:21.481 01:52:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77682 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77682 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 77682 ']' 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.481 01:52:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.481 [2024-12-07 01:52:26.846435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:21.481 [2024-12-07 01:52:26.846546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77682 ] 00:08:21.741 [2024-12-07 01:52:26.989767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.741 [2024-12-07 01:52:27.033241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.741 [2024-12-07 01:52:27.074114] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.741 [2024-12-07 01:52:27.074152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:22.312 
01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.312 malloc1 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.312 [2024-12-07 01:52:27.691523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:22.312 [2024-12-07 01:52:27.691626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.312 [2024-12-07 01:52:27.691668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:22.312 [2024-12-07 01:52:27.691703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.312 [2024-12-07 01:52:27.693858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.312 [2024-12-07 01:52:27.693931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.312 pt1 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.312 malloc2 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.312 [2024-12-07 01:52:27.740348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.312 [2024-12-07 01:52:27.740534] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.312 [2024-12-07 01:52:27.740610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:22.312 [2024-12-07 01:52:27.740718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.312 [2024-12-07 01:52:27.744655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.312 [2024-12-07 01:52:27.744782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.312 
pt2 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.312 malloc3 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.312 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.572 [2024-12-07 01:52:27.773649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:22.572 [2024-12-07 01:52:27.773743] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.572 [2024-12-07 01:52:27.773790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:22.572 [2024-12-07 01:52:27.773819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.572 [2024-12-07 01:52:27.775883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.572 [2024-12-07 01:52:27.775954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:22.572 pt3 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.572 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.572 [2024-12-07 01:52:27.785706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.572 [2024-12-07 01:52:27.787498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.572 [2024-12-07 01:52:27.787590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:22.572 [2024-12-07 01:52:27.787762] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:22.572 [2024-12-07 01:52:27.787813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.572 [2024-12-07 01:52:27.788069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:22.572 [2024-12-07 01:52:27.788234] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:22.573 [2024-12-07 01:52:27.788278] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:22.573 [2024-12-07 01:52:27.788427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.573 01:52:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.573 "name": "raid_bdev1", 00:08:22.573 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:22.573 "strip_size_kb": 64, 00:08:22.573 "state": "online", 00:08:22.573 "raid_level": "concat", 00:08:22.573 "superblock": true, 00:08:22.573 "num_base_bdevs": 3, 00:08:22.573 "num_base_bdevs_discovered": 3, 00:08:22.573 "num_base_bdevs_operational": 3, 00:08:22.573 "base_bdevs_list": [ 00:08:22.573 { 00:08:22.573 "name": "pt1", 00:08:22.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.573 "is_configured": true, 00:08:22.573 "data_offset": 2048, 00:08:22.573 "data_size": 63488 00:08:22.573 }, 00:08:22.573 { 00:08:22.573 "name": "pt2", 00:08:22.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.573 "is_configured": true, 00:08:22.573 "data_offset": 2048, 00:08:22.573 "data_size": 63488 00:08:22.573 }, 00:08:22.573 { 00:08:22.573 "name": "pt3", 00:08:22.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:22.573 "is_configured": true, 00:08:22.573 "data_offset": 2048, 00:08:22.573 "data_size": 63488 00:08:22.573 } 00:08:22.573 ] 00:08:22.573 }' 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.573 01:52:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.833 [2024-12-07 01:52:28.221196] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:22.833 "name": "raid_bdev1", 00:08:22.833 "aliases": [ 00:08:22.833 "f976b772-5aaf-4303-88a5-9ff078430e84" 00:08:22.833 ], 00:08:22.833 "product_name": "Raid Volume", 00:08:22.833 "block_size": 512, 00:08:22.833 "num_blocks": 190464, 00:08:22.833 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:22.833 "assigned_rate_limits": { 00:08:22.833 "rw_ios_per_sec": 0, 00:08:22.833 "rw_mbytes_per_sec": 0, 00:08:22.833 "r_mbytes_per_sec": 0, 00:08:22.833 "w_mbytes_per_sec": 0 00:08:22.833 }, 00:08:22.833 "claimed": false, 00:08:22.833 "zoned": false, 00:08:22.833 "supported_io_types": { 00:08:22.833 "read": true, 00:08:22.833 "write": true, 00:08:22.833 "unmap": true, 00:08:22.833 "flush": true, 00:08:22.833 "reset": true, 00:08:22.833 "nvme_admin": false, 00:08:22.833 "nvme_io": false, 00:08:22.833 "nvme_io_md": false, 00:08:22.833 "write_zeroes": true, 00:08:22.833 "zcopy": false, 00:08:22.833 "get_zone_info": false, 00:08:22.833 "zone_management": false, 00:08:22.833 "zone_append": false, 00:08:22.833 "compare": 
false, 00:08:22.833 "compare_and_write": false, 00:08:22.833 "abort": false, 00:08:22.833 "seek_hole": false, 00:08:22.833 "seek_data": false, 00:08:22.833 "copy": false, 00:08:22.833 "nvme_iov_md": false 00:08:22.833 }, 00:08:22.833 "memory_domains": [ 00:08:22.833 { 00:08:22.833 "dma_device_id": "system", 00:08:22.833 "dma_device_type": 1 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.833 "dma_device_type": 2 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "dma_device_id": "system", 00:08:22.833 "dma_device_type": 1 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.833 "dma_device_type": 2 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "dma_device_id": "system", 00:08:22.833 "dma_device_type": 1 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.833 "dma_device_type": 2 00:08:22.833 } 00:08:22.833 ], 00:08:22.833 "driver_specific": { 00:08:22.833 "raid": { 00:08:22.833 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:22.833 "strip_size_kb": 64, 00:08:22.833 "state": "online", 00:08:22.833 "raid_level": "concat", 00:08:22.833 "superblock": true, 00:08:22.833 "num_base_bdevs": 3, 00:08:22.833 "num_base_bdevs_discovered": 3, 00:08:22.833 "num_base_bdevs_operational": 3, 00:08:22.833 "base_bdevs_list": [ 00:08:22.833 { 00:08:22.833 "name": "pt1", 00:08:22.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.833 "is_configured": true, 00:08:22.833 "data_offset": 2048, 00:08:22.833 "data_size": 63488 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "name": "pt2", 00:08:22.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.833 "is_configured": true, 00:08:22.833 "data_offset": 2048, 00:08:22.833 "data_size": 63488 00:08:22.833 }, 00:08:22.833 { 00:08:22.833 "name": "pt3", 00:08:22.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:22.833 "is_configured": true, 00:08:22.833 "data_offset": 2048, 00:08:22.833 
"data_size": 63488 00:08:22.833 } 00:08:22.833 ] 00:08:22.833 } 00:08:22.833 } 00:08:22.833 }' 00:08:22.833 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:23.093 pt2 00:08:23.093 pt3' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.093 01:52:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.093 [2024-12-07 01:52:28.500684] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.093 01:52:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f976b772-5aaf-4303-88a5-9ff078430e84 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f976b772-5aaf-4303-88a5-9ff078430e84 ']' 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.093 [2024-12-07 01:52:28.544348] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.093 [2024-12-07 01:52:28.544411] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.093 [2024-12-07 01:52:28.544491] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.093 [2024-12-07 01:52:28.544562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.093 [2024-12-07 01:52:28.544584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:23.093 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.354 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.354 [2024-12-07 01:52:28.696140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:23.354 [2024-12-07 01:52:28.698061] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:08:23.354 [2024-12-07 01:52:28.698102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:23.354 [2024-12-07 01:52:28.698159] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:23.354 [2024-12-07 01:52:28.698198] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:23.354 [2024-12-07 01:52:28.698216] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:23.354 [2024-12-07 01:52:28.698228] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.354 [2024-12-07 01:52:28.698245] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:23.354 request: 00:08:23.354 { 00:08:23.354 "name": "raid_bdev1", 00:08:23.354 "raid_level": "concat", 00:08:23.354 "base_bdevs": [ 00:08:23.354 "malloc1", 00:08:23.354 "malloc2", 00:08:23.354 "malloc3" 00:08:23.354 ], 00:08:23.354 "strip_size_kb": 64, 00:08:23.354 "superblock": false, 00:08:23.354 "method": "bdev_raid_create", 00:08:23.354 "req_id": 1 00:08:23.354 } 00:08:23.354 Got JSON-RPC error response 00:08:23.354 response: 00:08:23.354 { 00:08:23.354 "code": -17, 00:08:23.354 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:23.354 } 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.355 [2024-12-07 01:52:28.763978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:23.355 [2024-12-07 01:52:28.764061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.355 [2024-12-07 01:52:28.764091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:23.355 [2024-12-07 01:52:28.764119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.355 [2024-12-07 01:52:28.766150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.355 [2024-12-07 01:52:28.766235] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:23.355 [2024-12-07 01:52:28.766318] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:23.355 [2024-12-07 01:52:28.766365] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:23.355 pt1 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.355 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.615 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.615 "name": "raid_bdev1", 
00:08:23.615 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:23.615 "strip_size_kb": 64, 00:08:23.615 "state": "configuring", 00:08:23.615 "raid_level": "concat", 00:08:23.615 "superblock": true, 00:08:23.615 "num_base_bdevs": 3, 00:08:23.615 "num_base_bdevs_discovered": 1, 00:08:23.615 "num_base_bdevs_operational": 3, 00:08:23.615 "base_bdevs_list": [ 00:08:23.615 { 00:08:23.615 "name": "pt1", 00:08:23.615 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.615 "is_configured": true, 00:08:23.615 "data_offset": 2048, 00:08:23.615 "data_size": 63488 00:08:23.615 }, 00:08:23.615 { 00:08:23.615 "name": null, 00:08:23.615 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.615 "is_configured": false, 00:08:23.615 "data_offset": 2048, 00:08:23.615 "data_size": 63488 00:08:23.615 }, 00:08:23.615 { 00:08:23.615 "name": null, 00:08:23.615 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:23.615 "is_configured": false, 00:08:23.615 "data_offset": 2048, 00:08:23.615 "data_size": 63488 00:08:23.615 } 00:08:23.615 ] 00:08:23.615 }' 00:08:23.615 01:52:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.615 01:52:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.874 [2024-12-07 01:52:29.211250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:23.874 [2024-12-07 01:52:29.211375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.874 [2024-12-07 01:52:29.211400] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:23.874 [2024-12-07 01:52:29.211413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.874 [2024-12-07 01:52:29.211811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.874 [2024-12-07 01:52:29.211831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:23.874 [2024-12-07 01:52:29.211904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:23.874 [2024-12-07 01:52:29.211928] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:23.874 pt2 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.874 [2024-12-07 01:52:29.223215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.874 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.875 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.875 "name": "raid_bdev1", 00:08:23.875 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:23.875 "strip_size_kb": 64, 00:08:23.875 "state": "configuring", 00:08:23.875 "raid_level": "concat", 00:08:23.875 "superblock": true, 00:08:23.875 "num_base_bdevs": 3, 00:08:23.875 "num_base_bdevs_discovered": 1, 00:08:23.875 "num_base_bdevs_operational": 3, 00:08:23.875 "base_bdevs_list": [ 00:08:23.875 { 00:08:23.875 "name": "pt1", 00:08:23.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.875 "is_configured": true, 00:08:23.875 "data_offset": 2048, 00:08:23.875 "data_size": 63488 00:08:23.875 }, 00:08:23.875 { 00:08:23.875 "name": null, 00:08:23.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.875 "is_configured": false, 00:08:23.875 "data_offset": 0, 00:08:23.875 "data_size": 63488 00:08:23.875 }, 00:08:23.875 { 00:08:23.875 "name": null, 00:08:23.875 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:23.875 "is_configured": false, 00:08:23.875 "data_offset": 2048, 00:08:23.875 "data_size": 63488 00:08:23.875 } 00:08:23.875 ] 00:08:23.875 }' 00:08:23.875 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.875 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 [2024-12-07 01:52:29.658464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.444 [2024-12-07 01:52:29.658573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.444 [2024-12-07 01:52:29.658608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:24.444 [2024-12-07 01:52:29.658634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.444 [2024-12-07 01:52:29.659024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.444 [2024-12-07 01:52:29.659077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.444 [2024-12-07 01:52:29.659166] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:24.444 [2024-12-07 01:52:29.659211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.444 pt2 00:08:24.444 01:52:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 [2024-12-07 01:52:29.670441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:24.444 [2024-12-07 01:52:29.670545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.444 [2024-12-07 01:52:29.670576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:24.444 [2024-12-07 01:52:29.670602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.444 [2024-12-07 01:52:29.670928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.444 [2024-12-07 01:52:29.670949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:24.444 [2024-12-07 01:52:29.671011] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:24.444 [2024-12-07 01:52:29.671038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:24.444 [2024-12-07 01:52:29.671127] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:24.444 [2024-12-07 01:52:29.671134] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:24.444 [2024-12-07 01:52:29.671338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:24.444 [2024-12-07 01:52:29.671433] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:24.444 [2024-12-07 01:52:29.671449] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:24.444 [2024-12-07 01:52:29.671539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.444 pt3 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.444 01:52:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.444 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.444 "name": "raid_bdev1", 00:08:24.444 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:24.444 "strip_size_kb": 64, 00:08:24.444 "state": "online", 00:08:24.444 "raid_level": "concat", 00:08:24.445 "superblock": true, 00:08:24.445 "num_base_bdevs": 3, 00:08:24.445 "num_base_bdevs_discovered": 3, 00:08:24.445 "num_base_bdevs_operational": 3, 00:08:24.445 "base_bdevs_list": [ 00:08:24.445 { 00:08:24.445 "name": "pt1", 00:08:24.445 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.445 "is_configured": true, 00:08:24.445 "data_offset": 2048, 00:08:24.445 "data_size": 63488 00:08:24.445 }, 00:08:24.445 { 00:08:24.445 "name": "pt2", 00:08:24.445 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.445 "is_configured": true, 00:08:24.445 "data_offset": 2048, 00:08:24.445 "data_size": 63488 00:08:24.445 }, 00:08:24.445 { 00:08:24.445 "name": "pt3", 00:08:24.445 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.445 "is_configured": true, 00:08:24.445 "data_offset": 2048, 00:08:24.445 "data_size": 63488 00:08:24.445 } 00:08:24.445 ] 00:08:24.445 }' 00:08:24.445 01:52:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.445 01:52:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.704 [2024-12-07 01:52:30.145958] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.704 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.963 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.963 "name": "raid_bdev1", 00:08:24.963 "aliases": [ 00:08:24.963 "f976b772-5aaf-4303-88a5-9ff078430e84" 00:08:24.963 ], 00:08:24.963 "product_name": "Raid Volume", 00:08:24.963 "block_size": 512, 00:08:24.963 "num_blocks": 190464, 00:08:24.963 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:24.963 "assigned_rate_limits": { 00:08:24.963 "rw_ios_per_sec": 0, 00:08:24.963 "rw_mbytes_per_sec": 0, 00:08:24.963 "r_mbytes_per_sec": 0, 00:08:24.963 "w_mbytes_per_sec": 0 00:08:24.963 }, 00:08:24.963 "claimed": false, 00:08:24.963 "zoned": false, 00:08:24.963 "supported_io_types": { 00:08:24.963 "read": true, 00:08:24.963 "write": true, 00:08:24.963 "unmap": true, 00:08:24.963 "flush": true, 00:08:24.963 "reset": true, 00:08:24.963 "nvme_admin": false, 00:08:24.963 "nvme_io": false, 
00:08:24.963 "nvme_io_md": false, 00:08:24.963 "write_zeroes": true, 00:08:24.963 "zcopy": false, 00:08:24.963 "get_zone_info": false, 00:08:24.963 "zone_management": false, 00:08:24.963 "zone_append": false, 00:08:24.963 "compare": false, 00:08:24.963 "compare_and_write": false, 00:08:24.963 "abort": false, 00:08:24.963 "seek_hole": false, 00:08:24.963 "seek_data": false, 00:08:24.963 "copy": false, 00:08:24.963 "nvme_iov_md": false 00:08:24.963 }, 00:08:24.963 "memory_domains": [ 00:08:24.963 { 00:08:24.963 "dma_device_id": "system", 00:08:24.963 "dma_device_type": 1 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.963 "dma_device_type": 2 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "dma_device_id": "system", 00:08:24.963 "dma_device_type": 1 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.963 "dma_device_type": 2 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "dma_device_id": "system", 00:08:24.963 "dma_device_type": 1 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.963 "dma_device_type": 2 00:08:24.963 } 00:08:24.963 ], 00:08:24.963 "driver_specific": { 00:08:24.963 "raid": { 00:08:24.963 "uuid": "f976b772-5aaf-4303-88a5-9ff078430e84", 00:08:24.963 "strip_size_kb": 64, 00:08:24.963 "state": "online", 00:08:24.963 "raid_level": "concat", 00:08:24.963 "superblock": true, 00:08:24.963 "num_base_bdevs": 3, 00:08:24.963 "num_base_bdevs_discovered": 3, 00:08:24.963 "num_base_bdevs_operational": 3, 00:08:24.963 "base_bdevs_list": [ 00:08:24.963 { 00:08:24.963 "name": "pt1", 00:08:24.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:24.963 "is_configured": true, 00:08:24.963 "data_offset": 2048, 00:08:24.963 "data_size": 63488 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "name": "pt2", 00:08:24.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.963 "is_configured": true, 00:08:24.963 "data_offset": 2048, 00:08:24.963 
"data_size": 63488 00:08:24.963 }, 00:08:24.963 { 00:08:24.963 "name": "pt3", 00:08:24.963 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:24.963 "is_configured": true, 00:08:24.963 "data_offset": 2048, 00:08:24.963 "data_size": 63488 00:08:24.963 } 00:08:24.963 ] 00:08:24.963 } 00:08:24.963 } 00:08:24.963 }' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:24.964 pt2 00:08:24.964 pt3' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.964 01:52:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:24.964 [2024-12-07 01:52:30.421444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f976b772-5aaf-4303-88a5-9ff078430e84 '!=' f976b772-5aaf-4303-88a5-9ff078430e84 ']' 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77682 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 77682 ']' 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 77682 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77682 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77682' 00:08:25.224 killing process with pid 77682 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 77682 00:08:25.224 [2024-12-07 01:52:30.511915] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:25.224 [2024-12-07 01:52:30.511993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.224 [2024-12-07 01:52:30.512056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.224 [2024-12-07 01:52:30.512066] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:25.224 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 77682 00:08:25.224 [2024-12-07 01:52:30.543969] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.484 01:52:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:25.484 00:08:25.484 real 0m4.017s 00:08:25.484 user 0m6.346s 00:08:25.484 sys 0m0.826s 00:08:25.484 ************************************ 00:08:25.484 END TEST raid_superblock_test 00:08:25.484 ************************************ 00:08:25.484 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.484 01:52:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.484 01:52:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:25.484 01:52:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:25.484 01:52:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.484 01:52:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.484 ************************************ 00:08:25.484 START TEST raid_read_error_test 00:08:25.484 ************************************ 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:25.484 01:52:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oREPTf5VHp 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77922 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77922 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 77922 ']' 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.484 01:52:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.744 [2024-12-07 01:52:30.953438] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:25.744 [2024-12-07 01:52:30.953554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77922 ] 00:08:25.744 [2024-12-07 01:52:31.096867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.744 [2024-12-07 01:52:31.141386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.744 [2024-12-07 01:52:31.182253] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.744 [2024-12-07 01:52:31.182299] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 BaseBdev1_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 true 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 [2024-12-07 01:52:31.815578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:26.684 [2024-12-07 01:52:31.815641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.684 [2024-12-07 01:52:31.815676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:26.684 [2024-12-07 01:52:31.815692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.684 [2024-12-07 01:52:31.817789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.684 [2024-12-07 01:52:31.817867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:26.684 BaseBdev1 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 BaseBdev2_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 true 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 [2024-12-07 01:52:31.873034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:26.684 [2024-12-07 01:52:31.873108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.684 [2024-12-07 01:52:31.873137] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:26.684 [2024-12-07 01:52:31.873150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.684 [2024-12-07 01:52:31.876142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.684 [2024-12-07 01:52:31.876190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:26.684 BaseBdev2 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 BaseBdev3_malloc 00:08:26.684 01:52:31 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 true 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 [2024-12-07 01:52:31.913543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:26.684 [2024-12-07 01:52:31.913585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.684 [2024-12-07 01:52:31.913618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:26.684 [2024-12-07 01:52:31.913626] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.684 [2024-12-07 01:52:31.915644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.684 [2024-12-07 01:52:31.915691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:26.684 BaseBdev3 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 [2024-12-07 01:52:31.925595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.684 [2024-12-07 01:52:31.927397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.684 [2024-12-07 01:52:31.927467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.684 [2024-12-07 01:52:31.927641] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:26.684 [2024-12-07 01:52:31.927654] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.684 [2024-12-07 01:52:31.927927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:26.684 [2024-12-07 01:52:31.928071] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:26.684 [2024-12-07 01:52:31.928096] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:26.684 [2024-12-07 01:52:31.928207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.684 01:52:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.684 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.684 "name": "raid_bdev1", 00:08:26.684 "uuid": "6e7254ed-3dfc-404c-a777-8ed80822cdd5", 00:08:26.684 "strip_size_kb": 64, 00:08:26.684 "state": "online", 00:08:26.684 "raid_level": "concat", 00:08:26.684 "superblock": true, 00:08:26.684 "num_base_bdevs": 3, 00:08:26.684 "num_base_bdevs_discovered": 3, 00:08:26.684 "num_base_bdevs_operational": 3, 00:08:26.684 "base_bdevs_list": [ 00:08:26.684 { 00:08:26.684 "name": "BaseBdev1", 00:08:26.684 "uuid": "01f8cbaf-0cc7-5ada-9118-eeb41bb695a1", 00:08:26.684 "is_configured": true, 00:08:26.684 "data_offset": 2048, 00:08:26.684 "data_size": 63488 00:08:26.684 }, 00:08:26.684 { 00:08:26.684 "name": "BaseBdev2", 00:08:26.684 "uuid": "0fd5fe1b-9645-590b-ade8-00234ec06725", 00:08:26.684 "is_configured": true, 00:08:26.684 "data_offset": 2048, 00:08:26.684 "data_size": 63488 
00:08:26.684 }, 00:08:26.684 { 00:08:26.684 "name": "BaseBdev3", 00:08:26.684 "uuid": "229dbde6-d0c7-5b72-97fa-3b7a9b805a8c", 00:08:26.684 "is_configured": true, 00:08:26.684 "data_offset": 2048, 00:08:26.685 "data_size": 63488 00:08:26.685 } 00:08:26.685 ] 00:08:26.685 }' 00:08:26.685 01:52:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.685 01:52:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.943 01:52:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.943 01:52:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:27.202 [2024-12-07 01:52:32.473016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.138 "name": "raid_bdev1", 00:08:28.138 "uuid": "6e7254ed-3dfc-404c-a777-8ed80822cdd5", 00:08:28.138 "strip_size_kb": 64, 00:08:28.138 "state": "online", 00:08:28.138 "raid_level": "concat", 00:08:28.138 "superblock": true, 00:08:28.138 "num_base_bdevs": 3, 00:08:28.138 "num_base_bdevs_discovered": 3, 00:08:28.138 "num_base_bdevs_operational": 3, 00:08:28.138 "base_bdevs_list": [ 00:08:28.138 { 00:08:28.138 "name": "BaseBdev1", 00:08:28.138 "uuid": "01f8cbaf-0cc7-5ada-9118-eeb41bb695a1", 00:08:28.138 "is_configured": true, 00:08:28.138 "data_offset": 2048, 00:08:28.138 "data_size": 63488 
00:08:28.138 }, 00:08:28.138 { 00:08:28.138 "name": "BaseBdev2", 00:08:28.138 "uuid": "0fd5fe1b-9645-590b-ade8-00234ec06725", 00:08:28.138 "is_configured": true, 00:08:28.138 "data_offset": 2048, 00:08:28.138 "data_size": 63488 00:08:28.138 }, 00:08:28.138 { 00:08:28.138 "name": "BaseBdev3", 00:08:28.138 "uuid": "229dbde6-d0c7-5b72-97fa-3b7a9b805a8c", 00:08:28.138 "is_configured": true, 00:08:28.138 "data_offset": 2048, 00:08:28.138 "data_size": 63488 00:08:28.138 } 00:08:28.138 ] 00:08:28.138 }' 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.138 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.397 [2024-12-07 01:52:33.828721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.397 [2024-12-07 01:52:33.828754] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.397 [2024-12-07 01:52:33.831202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.397 [2024-12-07 01:52:33.831259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.397 [2024-12-07 01:52:33.831293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.397 [2024-12-07 01:52:33.831305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:28.397 { 00:08:28.397 "results": [ 00:08:28.397 { 00:08:28.397 "job": "raid_bdev1", 00:08:28.397 "core_mask": "0x1", 00:08:28.397 "workload": "randrw", 00:08:28.397 "percentage": 50, 
00:08:28.397 "status": "finished", 00:08:28.397 "queue_depth": 1, 00:08:28.397 "io_size": 131072, 00:08:28.397 "runtime": 1.356442, 00:08:28.397 "iops": 17198.671229584455, 00:08:28.397 "mibps": 2149.833903698057, 00:08:28.397 "io_failed": 1, 00:08:28.397 "io_timeout": 0, 00:08:28.397 "avg_latency_us": 80.5530857246606, 00:08:28.397 "min_latency_us": 24.705676855895195, 00:08:28.397 "max_latency_us": 1380.8349344978167 00:08:28.397 } 00:08:28.397 ], 00:08:28.397 "core_count": 1 00:08:28.397 } 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77922 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 77922 ']' 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 77922 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.397 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77922 00:08:28.656 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:28.656 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:28.656 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77922' 00:08:28.656 killing process with pid 77922 00:08:28.656 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 77922 00:08:28.656 [2024-12-07 01:52:33.869949] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.656 01:52:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 77922 00:08:28.656 [2024-12-07 
01:52:33.894585] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oREPTf5VHp 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:28.916 00:08:28.916 real 0m3.280s 00:08:28.916 user 0m4.149s 00:08:28.916 sys 0m0.525s 00:08:28.916 ************************************ 00:08:28.916 END TEST raid_read_error_test 00:08:28.916 ************************************ 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.916 01:52:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.916 01:52:34 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:28.916 01:52:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:28.916 01:52:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.916 01:52:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.916 ************************************ 00:08:28.916 START TEST raid_write_error_test 00:08:28.916 ************************************ 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:08:28.916 01:52:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:28.916 01:52:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9St2r2bqn1 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78057 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78057 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 78057 ']' 00:08:28.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:28.916 01:52:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.916 [2024-12-07 01:52:34.300877] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:28.916 [2024-12-07 01:52:34.300981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78057 ] 00:08:29.176 [2024-12-07 01:52:34.444578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.176 [2024-12-07 01:52:34.489740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.176 [2024-12-07 01:52:34.530967] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.176 [2024-12-07 01:52:34.531007] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 BaseBdev1_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 true 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 [2024-12-07 01:52:35.156651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:29.766 [2024-12-07 01:52:35.156741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.766 [2024-12-07 01:52:35.156773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:29.766 [2024-12-07 01:52:35.156782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.766 [2024-12-07 01:52:35.158853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.766 [2024-12-07 01:52:35.158890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:29.766 BaseBdev1 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.766 BaseBdev2_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 true 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.766 [2024-12-07 01:52:35.208035] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:29.766 [2024-12-07 01:52:35.208132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.766 [2024-12-07 01:52:35.208156] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:29.766 [2024-12-07 01:52:35.208164] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.766 [2024-12-07 01:52:35.210173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.766 [2024-12-07 01:52:35.210209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:29.766 BaseBdev2 00:08:29.766 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:29.767 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:29.767 01:52:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:29.767 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.767 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.026 BaseBdev3_malloc 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.026 true 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.026 [2024-12-07 01:52:35.248495] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:30.026 [2024-12-07 01:52:35.248543] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.026 [2024-12-07 01:52:35.248561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:30.026 [2024-12-07 01:52:35.248569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.026 [2024-12-07 01:52:35.250574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.026 [2024-12-07 01:52:35.250674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:30.026 BaseBdev3 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.026 [2024-12-07 01:52:35.260544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.026 [2024-12-07 01:52:35.262431] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.026 [2024-12-07 01:52:35.262556] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.026 [2024-12-07 01:52:35.262760] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:30.026 [2024-12-07 01:52:35.262808] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.026 [2024-12-07 01:52:35.263068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:30.026 [2024-12-07 01:52:35.263275] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:30.026 [2024-12-07 01:52:35.263290] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:30.026 [2024-12-07 01:52:35.263433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.026 "name": "raid_bdev1", 00:08:30.026 "uuid": "77ac217c-7ff5-4d9b-a2d9-4d3c6c2142db", 00:08:30.026 "strip_size_kb": 64, 00:08:30.026 "state": "online", 00:08:30.026 "raid_level": "concat", 00:08:30.026 "superblock": true, 00:08:30.026 "num_base_bdevs": 3, 00:08:30.026 "num_base_bdevs_discovered": 3, 00:08:30.026 "num_base_bdevs_operational": 3, 00:08:30.026 "base_bdevs_list": [ 00:08:30.026 { 00:08:30.026 
"name": "BaseBdev1", 00:08:30.026 "uuid": "5dfa07ea-87e2-57c9-b4eb-1585bad39ef3", 00:08:30.026 "is_configured": true, 00:08:30.026 "data_offset": 2048, 00:08:30.026 "data_size": 63488 00:08:30.026 }, 00:08:30.026 { 00:08:30.026 "name": "BaseBdev2", 00:08:30.026 "uuid": "d6e6941f-36ed-5d2f-b137-a0b0ec1df53c", 00:08:30.026 "is_configured": true, 00:08:30.026 "data_offset": 2048, 00:08:30.026 "data_size": 63488 00:08:30.026 }, 00:08:30.026 { 00:08:30.026 "name": "BaseBdev3", 00:08:30.026 "uuid": "fa043562-ba2f-5fd7-9206-a87539e3204f", 00:08:30.026 "is_configured": true, 00:08:30.026 "data_offset": 2048, 00:08:30.026 "data_size": 63488 00:08:30.026 } 00:08:30.026 ] 00:08:30.026 }' 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.026 01:52:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.284 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:30.284 01:52:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:30.543 [2024-12-07 01:52:35.788006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.482 "name": "raid_bdev1", 00:08:31.482 "uuid": "77ac217c-7ff5-4d9b-a2d9-4d3c6c2142db", 00:08:31.482 "strip_size_kb": 64, 00:08:31.482 "state": "online", 
00:08:31.482 "raid_level": "concat", 00:08:31.482 "superblock": true, 00:08:31.482 "num_base_bdevs": 3, 00:08:31.482 "num_base_bdevs_discovered": 3, 00:08:31.482 "num_base_bdevs_operational": 3, 00:08:31.482 "base_bdevs_list": [ 00:08:31.482 { 00:08:31.482 "name": "BaseBdev1", 00:08:31.482 "uuid": "5dfa07ea-87e2-57c9-b4eb-1585bad39ef3", 00:08:31.482 "is_configured": true, 00:08:31.482 "data_offset": 2048, 00:08:31.482 "data_size": 63488 00:08:31.482 }, 00:08:31.482 { 00:08:31.482 "name": "BaseBdev2", 00:08:31.482 "uuid": "d6e6941f-36ed-5d2f-b137-a0b0ec1df53c", 00:08:31.482 "is_configured": true, 00:08:31.482 "data_offset": 2048, 00:08:31.482 "data_size": 63488 00:08:31.482 }, 00:08:31.482 { 00:08:31.482 "name": "BaseBdev3", 00:08:31.482 "uuid": "fa043562-ba2f-5fd7-9206-a87539e3204f", 00:08:31.482 "is_configured": true, 00:08:31.482 "data_offset": 2048, 00:08:31.482 "data_size": 63488 00:08:31.482 } 00:08:31.482 ] 00:08:31.482 }' 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.482 01:52:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.742 [2024-12-07 01:52:37.175500] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:31.742 [2024-12-07 01:52:37.175582] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:31.742 [2024-12-07 01:52:37.178148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:31.742 [2024-12-07 01:52:37.178204] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.742 [2024-12-07 01:52:37.178252] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:31.742 [2024-12-07 01:52:37.178269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78057 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 78057 ']' 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 78057 00:08:31.742 { 00:08:31.742 "results": [ 00:08:31.742 { 00:08:31.742 "job": "raid_bdev1", 00:08:31.742 "core_mask": "0x1", 00:08:31.742 "workload": "randrw", 00:08:31.742 "percentage": 50, 00:08:31.742 "status": "finished", 00:08:31.742 "queue_depth": 1, 00:08:31.742 "io_size": 131072, 00:08:31.742 "runtime": 1.388449, 00:08:31.742 "iops": 17036.995957359613, 00:08:31.742 "mibps": 2129.6244946699517, 00:08:31.742 "io_failed": 1, 00:08:31.742 "io_timeout": 0, 00:08:31.742 "avg_latency_us": 81.31213359462336, 00:08:31.742 "min_latency_us": 24.593886462882097, 00:08:31.742 "max_latency_us": 1380.8349344978167 00:08:31.742 } 00:08:31.742 ], 00:08:31.742 "core_count": 1 00:08:31.742 } 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:31.742 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78057 00:08:32.002 killing process with pid 78057 00:08:32.002 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.002 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:32.002 
01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78057' 00:08:32.002 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 78057 00:08:32.002 [2024-12-07 01:52:37.221449] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.002 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 78057 00:08:32.002 [2024-12-07 01:52:37.246302] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9St2r2bqn1 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:32.263 ************************************ 00:08:32.263 END TEST raid_write_error_test 00:08:32.263 ************************************ 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:32.263 00:08:32.263 real 0m3.288s 00:08:32.263 user 0m4.152s 00:08:32.263 sys 0m0.541s 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.263 01:52:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 01:52:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:32.263 01:52:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:32.263 01:52:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:32.263 01:52:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.263 01:52:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 ************************************ 00:08:32.263 START TEST raid_state_function_test 00:08:32.263 ************************************ 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78189 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78189' 00:08:32.263 Process raid pid: 78189 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78189 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 78189 ']' 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:32.263 01:52:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.263 [2024-12-07 01:52:37.648097] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:32.263 [2024-12-07 01:52:37.648210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.523 [2024-12-07 01:52:37.778541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.523 [2024-12-07 01:52:37.822583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.523 [2024-12-07 01:52:37.863716] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.523 [2024-12-07 01:52:37.863752] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.093 [2024-12-07 01:52:38.484513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.093 [2024-12-07 01:52:38.484570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.093 [2024-12-07 01:52:38.484582] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.093 [2024-12-07 01:52:38.484593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.093 [2024-12-07 01:52:38.484599] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.093 [2024-12-07 01:52:38.484610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.093 
01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.093 "name": "Existed_Raid", 00:08:33.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.093 "strip_size_kb": 0, 00:08:33.093 "state": "configuring", 00:08:33.093 "raid_level": "raid1", 00:08:33.093 "superblock": false, 00:08:33.093 "num_base_bdevs": 3, 00:08:33.093 "num_base_bdevs_discovered": 0, 00:08:33.093 "num_base_bdevs_operational": 3, 00:08:33.093 "base_bdevs_list": [ 00:08:33.093 { 00:08:33.093 "name": "BaseBdev1", 00:08:33.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.093 "is_configured": false, 00:08:33.093 "data_offset": 0, 00:08:33.093 "data_size": 0 00:08:33.093 }, 00:08:33.093 { 00:08:33.093 "name": "BaseBdev2", 00:08:33.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.093 "is_configured": false, 00:08:33.093 "data_offset": 0, 00:08:33.093 "data_size": 0 00:08:33.093 }, 00:08:33.093 { 00:08:33.093 "name": "BaseBdev3", 00:08:33.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.093 "is_configured": false, 00:08:33.093 "data_offset": 0, 00:08:33.093 "data_size": 0 00:08:33.093 } 00:08:33.093 ] 00:08:33.093 }' 00:08:33.093 01:52:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.093 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 [2024-12-07 01:52:38.915807] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:33.663 [2024-12-07 01:52:38.915902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 [2024-12-07 01:52:38.923799] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:33.663 [2024-12-07 01:52:38.923876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:33.663 [2024-12-07 01:52:38.923902] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:33.663 [2024-12-07 01:52:38.923924] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:33.663 [2024-12-07 01:52:38.923942] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:33.663 [2024-12-07 01:52:38.923963] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 [2024-12-07 01:52:38.940428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:33.663 BaseBdev1 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.663 [ 00:08:33.663 { 00:08:33.663 "name": "BaseBdev1", 00:08:33.663 "aliases": [ 00:08:33.663 "677cb206-07ce-4e8f-be6a-37b165e16a94" 00:08:33.663 ], 00:08:33.663 "product_name": "Malloc disk", 00:08:33.663 "block_size": 512, 00:08:33.663 "num_blocks": 65536, 00:08:33.663 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:33.663 "assigned_rate_limits": { 00:08:33.663 "rw_ios_per_sec": 0, 00:08:33.663 "rw_mbytes_per_sec": 0, 00:08:33.663 "r_mbytes_per_sec": 0, 00:08:33.663 "w_mbytes_per_sec": 0 00:08:33.663 }, 00:08:33.663 "claimed": true, 00:08:33.663 "claim_type": "exclusive_write", 00:08:33.663 "zoned": false, 00:08:33.663 "supported_io_types": { 00:08:33.663 "read": true, 00:08:33.663 "write": true, 00:08:33.663 "unmap": true, 00:08:33.663 "flush": true, 00:08:33.663 "reset": true, 00:08:33.663 "nvme_admin": false, 00:08:33.663 "nvme_io": false, 00:08:33.663 "nvme_io_md": false, 00:08:33.663 "write_zeroes": true, 00:08:33.663 "zcopy": true, 00:08:33.663 "get_zone_info": false, 00:08:33.663 "zone_management": false, 00:08:33.663 "zone_append": false, 00:08:33.663 "compare": false, 00:08:33.663 "compare_and_write": false, 00:08:33.663 "abort": true, 00:08:33.663 "seek_hole": false, 00:08:33.663 "seek_data": false, 00:08:33.663 "copy": true, 00:08:33.663 "nvme_iov_md": false 00:08:33.663 }, 00:08:33.663 "memory_domains": [ 00:08:33.663 { 00:08:33.663 "dma_device_id": "system", 00:08:33.663 "dma_device_type": 1 00:08:33.663 }, 00:08:33.663 { 00:08:33.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.663 "dma_device_type": 2 00:08:33.663 } 00:08:33.663 ], 00:08:33.663 "driver_specific": {} 00:08:33.663 } 00:08:33.663 ] 00:08:33.663 01:52:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.663 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.664 01:52:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.664 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.664 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:33.664 "name": "Existed_Raid", 00:08:33.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.664 "strip_size_kb": 0, 00:08:33.664 "state": "configuring", 00:08:33.664 "raid_level": "raid1", 00:08:33.664 "superblock": false, 00:08:33.664 "num_base_bdevs": 3, 00:08:33.664 "num_base_bdevs_discovered": 1, 00:08:33.664 "num_base_bdevs_operational": 3, 00:08:33.664 "base_bdevs_list": [ 00:08:33.664 { 00:08:33.664 "name": "BaseBdev1", 00:08:33.664 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:33.664 "is_configured": true, 00:08:33.664 "data_offset": 0, 00:08:33.664 "data_size": 65536 00:08:33.664 }, 00:08:33.664 { 00:08:33.664 "name": "BaseBdev2", 00:08:33.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.664 "is_configured": false, 00:08:33.664 "data_offset": 0, 00:08:33.664 "data_size": 0 00:08:33.664 }, 00:08:33.664 { 00:08:33.664 "name": "BaseBdev3", 00:08:33.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.664 "is_configured": false, 00:08:33.664 "data_offset": 0, 00:08:33.664 "data_size": 0 00:08:33.664 } 00:08:33.664 ] 00:08:33.664 }' 00:08:33.664 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.664 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 [2024-12-07 01:52:39.411701] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:34.236 [2024-12-07 01:52:39.411753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 [2024-12-07 01:52:39.419716] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:34.236 [2024-12-07 01:52:39.421493] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.236 [2024-12-07 01:52:39.421542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.236 [2024-12-07 01:52:39.421551] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.236 [2024-12-07 01:52:39.421561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.236 "name": "Existed_Raid", 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.236 "strip_size_kb": 0, 00:08:34.236 "state": "configuring", 00:08:34.236 "raid_level": "raid1", 00:08:34.236 "superblock": false, 00:08:34.236 "num_base_bdevs": 3, 00:08:34.236 "num_base_bdevs_discovered": 1, 00:08:34.236 "num_base_bdevs_operational": 3, 00:08:34.236 "base_bdevs_list": [ 00:08:34.236 { 00:08:34.236 "name": "BaseBdev1", 00:08:34.236 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:34.236 "is_configured": true, 00:08:34.236 "data_offset": 0, 00:08:34.236 "data_size": 65536 00:08:34.236 }, 00:08:34.236 { 00:08:34.236 "name": "BaseBdev2", 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.236 
"is_configured": false, 00:08:34.236 "data_offset": 0, 00:08:34.236 "data_size": 0 00:08:34.236 }, 00:08:34.236 { 00:08:34.236 "name": "BaseBdev3", 00:08:34.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.236 "is_configured": false, 00:08:34.236 "data_offset": 0, 00:08:34.236 "data_size": 0 00:08:34.236 } 00:08:34.236 ] 00:08:34.236 }' 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.236 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 [2024-12-07 01:52:39.915208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:34.497 BaseBdev2 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:34.497 01:52:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.497 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.497 [ 00:08:34.497 { 00:08:34.497 "name": "BaseBdev2", 00:08:34.497 "aliases": [ 00:08:34.497 "f37a8d33-e289-4eca-8694-a88e16a044b2" 00:08:34.497 ], 00:08:34.497 "product_name": "Malloc disk", 00:08:34.497 "block_size": 512, 00:08:34.497 "num_blocks": 65536, 00:08:34.497 "uuid": "f37a8d33-e289-4eca-8694-a88e16a044b2", 00:08:34.497 "assigned_rate_limits": { 00:08:34.497 "rw_ios_per_sec": 0, 00:08:34.497 "rw_mbytes_per_sec": 0, 00:08:34.497 "r_mbytes_per_sec": 0, 00:08:34.497 "w_mbytes_per_sec": 0 00:08:34.497 }, 00:08:34.497 "claimed": true, 00:08:34.497 "claim_type": "exclusive_write", 00:08:34.497 "zoned": false, 00:08:34.497 "supported_io_types": { 00:08:34.497 "read": true, 00:08:34.497 "write": true, 00:08:34.497 "unmap": true, 00:08:34.497 "flush": true, 00:08:34.497 "reset": true, 00:08:34.497 "nvme_admin": false, 00:08:34.497 "nvme_io": false, 00:08:34.497 "nvme_io_md": false, 00:08:34.497 "write_zeroes": true, 00:08:34.497 "zcopy": true, 00:08:34.497 "get_zone_info": false, 00:08:34.497 "zone_management": false, 00:08:34.497 "zone_append": false, 00:08:34.497 "compare": false, 00:08:34.497 "compare_and_write": false, 00:08:34.497 "abort": true, 00:08:34.497 "seek_hole": false, 00:08:34.497 "seek_data": false, 00:08:34.497 "copy": true, 00:08:34.497 "nvme_iov_md": false 00:08:34.497 }, 00:08:34.497 
"memory_domains": [ 00:08:34.497 { 00:08:34.497 "dma_device_id": "system", 00:08:34.497 "dma_device_type": 1 00:08:34.497 }, 00:08:34.497 { 00:08:34.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.497 "dma_device_type": 2 00:08:34.497 } 00:08:34.497 ], 00:08:34.757 "driver_specific": {} 00:08:34.757 } 00:08:34.757 ] 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.757 01:52:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.757 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.757 "name": "Existed_Raid", 00:08:34.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.757 "strip_size_kb": 0, 00:08:34.757 "state": "configuring", 00:08:34.757 "raid_level": "raid1", 00:08:34.757 "superblock": false, 00:08:34.757 "num_base_bdevs": 3, 00:08:34.757 "num_base_bdevs_discovered": 2, 00:08:34.757 "num_base_bdevs_operational": 3, 00:08:34.757 "base_bdevs_list": [ 00:08:34.757 { 00:08:34.757 "name": "BaseBdev1", 00:08:34.757 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:34.757 "is_configured": true, 00:08:34.757 "data_offset": 0, 00:08:34.757 "data_size": 65536 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "name": "BaseBdev2", 00:08:34.757 "uuid": "f37a8d33-e289-4eca-8694-a88e16a044b2", 00:08:34.757 "is_configured": true, 00:08:34.757 "data_offset": 0, 00:08:34.757 "data_size": 65536 00:08:34.757 }, 00:08:34.757 { 00:08:34.757 "name": "BaseBdev3", 00:08:34.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.757 "is_configured": false, 00:08:34.757 "data_offset": 0, 00:08:34.757 "data_size": 0 00:08:34.757 } 00:08:34.757 ] 00:08:34.757 }' 00:08:34.757 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.757 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.017 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:35.017 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.018 [2024-12-07 01:52:40.385280] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.018 [2024-12-07 01:52:40.385324] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:35.018 [2024-12-07 01:52:40.385334] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:35.018 [2024-12-07 01:52:40.385584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:35.018 [2024-12-07 01:52:40.385737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:35.018 [2024-12-07 01:52:40.385764] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:35.018 [2024-12-07 01:52:40.385957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.018 BaseBdev3 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.018 [ 00:08:35.018 { 00:08:35.018 "name": "BaseBdev3", 00:08:35.018 "aliases": [ 00:08:35.018 "dbc50e1f-49ac-485b-856f-e03a2e76f1a6" 00:08:35.018 ], 00:08:35.018 "product_name": "Malloc disk", 00:08:35.018 "block_size": 512, 00:08:35.018 "num_blocks": 65536, 00:08:35.018 "uuid": "dbc50e1f-49ac-485b-856f-e03a2e76f1a6", 00:08:35.018 "assigned_rate_limits": { 00:08:35.018 "rw_ios_per_sec": 0, 00:08:35.018 "rw_mbytes_per_sec": 0, 00:08:35.018 "r_mbytes_per_sec": 0, 00:08:35.018 "w_mbytes_per_sec": 0 00:08:35.018 }, 00:08:35.018 "claimed": true, 00:08:35.018 "claim_type": "exclusive_write", 00:08:35.018 "zoned": false, 00:08:35.018 "supported_io_types": { 00:08:35.018 "read": true, 00:08:35.018 "write": true, 00:08:35.018 "unmap": true, 00:08:35.018 "flush": true, 00:08:35.018 "reset": true, 00:08:35.018 "nvme_admin": false, 00:08:35.018 "nvme_io": false, 00:08:35.018 "nvme_io_md": false, 00:08:35.018 "write_zeroes": true, 00:08:35.018 "zcopy": true, 00:08:35.018 "get_zone_info": false, 00:08:35.018 "zone_management": false, 00:08:35.018 "zone_append": false, 00:08:35.018 "compare": false, 00:08:35.018 "compare_and_write": false, 00:08:35.018 "abort": true, 00:08:35.018 "seek_hole": false, 00:08:35.018 "seek_data": false, 00:08:35.018 
"copy": true, 00:08:35.018 "nvme_iov_md": false 00:08:35.018 }, 00:08:35.018 "memory_domains": [ 00:08:35.018 { 00:08:35.018 "dma_device_id": "system", 00:08:35.018 "dma_device_type": 1 00:08:35.018 }, 00:08:35.018 { 00:08:35.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.018 "dma_device_type": 2 00:08:35.018 } 00:08:35.018 ], 00:08:35.018 "driver_specific": {} 00:08:35.018 } 00:08:35.018 ] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.018 01:52:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.018 "name": "Existed_Raid", 00:08:35.018 "uuid": "ae0f27fe-3c38-45cc-a9a1-5341ca49d999", 00:08:35.018 "strip_size_kb": 0, 00:08:35.018 "state": "online", 00:08:35.018 "raid_level": "raid1", 00:08:35.018 "superblock": false, 00:08:35.018 "num_base_bdevs": 3, 00:08:35.018 "num_base_bdevs_discovered": 3, 00:08:35.018 "num_base_bdevs_operational": 3, 00:08:35.018 "base_bdevs_list": [ 00:08:35.018 { 00:08:35.018 "name": "BaseBdev1", 00:08:35.018 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:35.018 "is_configured": true, 00:08:35.018 "data_offset": 0, 00:08:35.018 "data_size": 65536 00:08:35.018 }, 00:08:35.018 { 00:08:35.018 "name": "BaseBdev2", 00:08:35.018 "uuid": "f37a8d33-e289-4eca-8694-a88e16a044b2", 00:08:35.018 "is_configured": true, 00:08:35.018 "data_offset": 0, 00:08:35.018 "data_size": 65536 00:08:35.018 }, 00:08:35.018 { 00:08:35.018 "name": "BaseBdev3", 00:08:35.018 "uuid": "dbc50e1f-49ac-485b-856f-e03a2e76f1a6", 00:08:35.018 "is_configured": true, 00:08:35.018 "data_offset": 0, 00:08:35.018 "data_size": 65536 00:08:35.018 } 00:08:35.018 ] 00:08:35.018 }' 00:08:35.018 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.019 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.589 01:52:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.589 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.590 [2024-12-07 01:52:40.852880] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.590 "name": "Existed_Raid", 00:08:35.590 "aliases": [ 00:08:35.590 "ae0f27fe-3c38-45cc-a9a1-5341ca49d999" 00:08:35.590 ], 00:08:35.590 "product_name": "Raid Volume", 00:08:35.590 "block_size": 512, 00:08:35.590 "num_blocks": 65536, 00:08:35.590 "uuid": "ae0f27fe-3c38-45cc-a9a1-5341ca49d999", 00:08:35.590 "assigned_rate_limits": { 00:08:35.590 "rw_ios_per_sec": 0, 00:08:35.590 "rw_mbytes_per_sec": 0, 00:08:35.590 "r_mbytes_per_sec": 0, 00:08:35.590 "w_mbytes_per_sec": 0 00:08:35.590 }, 00:08:35.590 "claimed": false, 00:08:35.590 "zoned": false, 
00:08:35.590 "supported_io_types": { 00:08:35.590 "read": true, 00:08:35.590 "write": true, 00:08:35.590 "unmap": false, 00:08:35.590 "flush": false, 00:08:35.590 "reset": true, 00:08:35.590 "nvme_admin": false, 00:08:35.590 "nvme_io": false, 00:08:35.590 "nvme_io_md": false, 00:08:35.590 "write_zeroes": true, 00:08:35.590 "zcopy": false, 00:08:35.590 "get_zone_info": false, 00:08:35.590 "zone_management": false, 00:08:35.590 "zone_append": false, 00:08:35.590 "compare": false, 00:08:35.590 "compare_and_write": false, 00:08:35.590 "abort": false, 00:08:35.590 "seek_hole": false, 00:08:35.590 "seek_data": false, 00:08:35.590 "copy": false, 00:08:35.590 "nvme_iov_md": false 00:08:35.590 }, 00:08:35.590 "memory_domains": [ 00:08:35.590 { 00:08:35.590 "dma_device_id": "system", 00:08:35.590 "dma_device_type": 1 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.590 "dma_device_type": 2 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "dma_device_id": "system", 00:08:35.590 "dma_device_type": 1 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.590 "dma_device_type": 2 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "dma_device_id": "system", 00:08:35.590 "dma_device_type": 1 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.590 "dma_device_type": 2 00:08:35.590 } 00:08:35.590 ], 00:08:35.590 "driver_specific": { 00:08:35.590 "raid": { 00:08:35.590 "uuid": "ae0f27fe-3c38-45cc-a9a1-5341ca49d999", 00:08:35.590 "strip_size_kb": 0, 00:08:35.590 "state": "online", 00:08:35.590 "raid_level": "raid1", 00:08:35.590 "superblock": false, 00:08:35.590 "num_base_bdevs": 3, 00:08:35.590 "num_base_bdevs_discovered": 3, 00:08:35.590 "num_base_bdevs_operational": 3, 00:08:35.590 "base_bdevs_list": [ 00:08:35.590 { 00:08:35.590 "name": "BaseBdev1", 00:08:35.590 "uuid": "677cb206-07ce-4e8f-be6a-37b165e16a94", 00:08:35.590 "is_configured": true, 00:08:35.590 
"data_offset": 0, 00:08:35.590 "data_size": 65536 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "name": "BaseBdev2", 00:08:35.590 "uuid": "f37a8d33-e289-4eca-8694-a88e16a044b2", 00:08:35.590 "is_configured": true, 00:08:35.590 "data_offset": 0, 00:08:35.590 "data_size": 65536 00:08:35.590 }, 00:08:35.590 { 00:08:35.590 "name": "BaseBdev3", 00:08:35.590 "uuid": "dbc50e1f-49ac-485b-856f-e03a2e76f1a6", 00:08:35.590 "is_configured": true, 00:08:35.590 "data_offset": 0, 00:08:35.590 "data_size": 65536 00:08:35.590 } 00:08:35.590 ] 00:08:35.590 } 00:08:35.590 } 00:08:35.590 }' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:35.590 BaseBdev2 00:08:35.590 BaseBdev3' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:35.590 01:52:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.590 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.851 [2024-12-07 01:52:41.080216] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.851 "name": "Existed_Raid", 00:08:35.851 "uuid": "ae0f27fe-3c38-45cc-a9a1-5341ca49d999", 00:08:35.851 "strip_size_kb": 0, 00:08:35.851 "state": "online", 00:08:35.851 "raid_level": "raid1", 00:08:35.851 "superblock": false, 00:08:35.851 "num_base_bdevs": 3, 00:08:35.851 "num_base_bdevs_discovered": 2, 00:08:35.851 "num_base_bdevs_operational": 2, 00:08:35.851 "base_bdevs_list": [ 00:08:35.851 { 00:08:35.851 "name": null, 00:08:35.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.851 "is_configured": false, 00:08:35.851 "data_offset": 0, 00:08:35.851 "data_size": 65536 00:08:35.851 }, 00:08:35.851 { 00:08:35.851 "name": "BaseBdev2", 00:08:35.851 "uuid": "f37a8d33-e289-4eca-8694-a88e16a044b2", 00:08:35.851 "is_configured": true, 00:08:35.851 "data_offset": 0, 00:08:35.851 "data_size": 65536 00:08:35.851 }, 00:08:35.851 { 00:08:35.851 "name": "BaseBdev3", 00:08:35.851 "uuid": "dbc50e1f-49ac-485b-856f-e03a2e76f1a6", 00:08:35.851 "is_configured": true, 00:08:35.851 "data_offset": 0, 00:08:35.851 "data_size": 65536 00:08:35.851 } 00:08:35.851 ] 
00:08:35.851 }' 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.851 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.112 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.372 [2024-12-07 01:52:41.594300] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.372 01:52:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.372 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.372 [2024-12-07 01:52:41.645455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.372 [2024-12-07 01:52:41.645544] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:36.372 [2024-12-07 01:52:41.657087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:36.373 [2024-12-07 01:52:41.657202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:36.373 [2024-12-07 01:52:41.657248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:36.373 01:52:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 BaseBdev2 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.373 
01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 [ 00:08:36.373 { 00:08:36.373 "name": "BaseBdev2", 00:08:36.373 "aliases": [ 00:08:36.373 "1a8d4995-57d7-4c0c-9177-7061731d8ca0" 00:08:36.373 ], 00:08:36.373 "product_name": "Malloc disk", 00:08:36.373 "block_size": 512, 00:08:36.373 "num_blocks": 65536, 00:08:36.373 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:36.373 "assigned_rate_limits": { 00:08:36.373 "rw_ios_per_sec": 0, 00:08:36.373 "rw_mbytes_per_sec": 0, 00:08:36.373 "r_mbytes_per_sec": 0, 00:08:36.373 "w_mbytes_per_sec": 0 00:08:36.373 }, 00:08:36.373 "claimed": false, 00:08:36.373 "zoned": false, 00:08:36.373 "supported_io_types": { 00:08:36.373 "read": true, 00:08:36.373 "write": true, 00:08:36.373 "unmap": true, 00:08:36.373 "flush": true, 00:08:36.373 "reset": true, 00:08:36.373 "nvme_admin": false, 00:08:36.373 "nvme_io": false, 00:08:36.373 "nvme_io_md": false, 00:08:36.373 "write_zeroes": true, 
00:08:36.373 "zcopy": true, 00:08:36.373 "get_zone_info": false, 00:08:36.373 "zone_management": false, 00:08:36.373 "zone_append": false, 00:08:36.373 "compare": false, 00:08:36.373 "compare_and_write": false, 00:08:36.373 "abort": true, 00:08:36.373 "seek_hole": false, 00:08:36.373 "seek_data": false, 00:08:36.373 "copy": true, 00:08:36.373 "nvme_iov_md": false 00:08:36.373 }, 00:08:36.373 "memory_domains": [ 00:08:36.373 { 00:08:36.373 "dma_device_id": "system", 00:08:36.373 "dma_device_type": 1 00:08:36.373 }, 00:08:36.373 { 00:08:36.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.373 "dma_device_type": 2 00:08:36.373 } 00:08:36.373 ], 00:08:36.373 "driver_specific": {} 00:08:36.373 } 00:08:36.373 ] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 BaseBdev3 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:36.373 01:52:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 [ 00:08:36.373 { 00:08:36.373 "name": "BaseBdev3", 00:08:36.373 "aliases": [ 00:08:36.373 "46d3c218-49da-4567-9920-b9157b978ede" 00:08:36.373 ], 00:08:36.373 "product_name": "Malloc disk", 00:08:36.373 "block_size": 512, 00:08:36.373 "num_blocks": 65536, 00:08:36.373 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:36.373 "assigned_rate_limits": { 00:08:36.373 "rw_ios_per_sec": 0, 00:08:36.373 "rw_mbytes_per_sec": 0, 00:08:36.373 "r_mbytes_per_sec": 0, 00:08:36.373 "w_mbytes_per_sec": 0 00:08:36.373 }, 00:08:36.373 "claimed": false, 00:08:36.373 "zoned": false, 00:08:36.373 "supported_io_types": { 00:08:36.373 "read": true, 00:08:36.373 "write": true, 00:08:36.373 "unmap": true, 00:08:36.373 "flush": true, 00:08:36.373 "reset": true, 00:08:36.373 "nvme_admin": false, 00:08:36.373 "nvme_io": false, 00:08:36.373 "nvme_io_md": false, 00:08:36.373 "write_zeroes": true, 
00:08:36.373 "zcopy": true, 00:08:36.373 "get_zone_info": false, 00:08:36.373 "zone_management": false, 00:08:36.373 "zone_append": false, 00:08:36.373 "compare": false, 00:08:36.373 "compare_and_write": false, 00:08:36.373 "abort": true, 00:08:36.373 "seek_hole": false, 00:08:36.373 "seek_data": false, 00:08:36.373 "copy": true, 00:08:36.373 "nvme_iov_md": false 00:08:36.373 }, 00:08:36.373 "memory_domains": [ 00:08:36.373 { 00:08:36.373 "dma_device_id": "system", 00:08:36.373 "dma_device_type": 1 00:08:36.373 }, 00:08:36.373 { 00:08:36.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.373 "dma_device_type": 2 00:08:36.373 } 00:08:36.373 ], 00:08:36.373 "driver_specific": {} 00:08:36.373 } 00:08:36.373 ] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.373 [2024-12-07 01:52:41.821507] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.373 [2024-12-07 01:52:41.821554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.373 [2024-12-07 01:52:41.821574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.373 [2024-12-07 01:52:41.823316] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.373 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.374 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.374 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.374 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.374 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:36.634 "name": "Existed_Raid", 00:08:36.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.634 "strip_size_kb": 0, 00:08:36.634 "state": "configuring", 00:08:36.634 "raid_level": "raid1", 00:08:36.634 "superblock": false, 00:08:36.634 "num_base_bdevs": 3, 00:08:36.634 "num_base_bdevs_discovered": 2, 00:08:36.634 "num_base_bdevs_operational": 3, 00:08:36.634 "base_bdevs_list": [ 00:08:36.634 { 00:08:36.634 "name": "BaseBdev1", 00:08:36.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.634 "is_configured": false, 00:08:36.634 "data_offset": 0, 00:08:36.634 "data_size": 0 00:08:36.634 }, 00:08:36.634 { 00:08:36.634 "name": "BaseBdev2", 00:08:36.634 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:36.634 "is_configured": true, 00:08:36.634 "data_offset": 0, 00:08:36.634 "data_size": 65536 00:08:36.634 }, 00:08:36.634 { 00:08:36.634 "name": "BaseBdev3", 00:08:36.634 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:36.634 "is_configured": true, 00:08:36.634 "data_offset": 0, 00:08:36.634 "data_size": 65536 00:08:36.634 } 00:08:36.634 ] 00:08:36.634 }' 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.634 01:52:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.893 [2024-12-07 01:52:42.272728] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.893 "name": "Existed_Raid", 00:08:36.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.893 "strip_size_kb": 0, 00:08:36.893 "state": "configuring", 00:08:36.893 "raid_level": "raid1", 00:08:36.893 "superblock": false, 00:08:36.893 "num_base_bdevs": 3, 
00:08:36.893 "num_base_bdevs_discovered": 1, 00:08:36.893 "num_base_bdevs_operational": 3, 00:08:36.893 "base_bdevs_list": [ 00:08:36.893 { 00:08:36.893 "name": "BaseBdev1", 00:08:36.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.893 "is_configured": false, 00:08:36.893 "data_offset": 0, 00:08:36.893 "data_size": 0 00:08:36.893 }, 00:08:36.893 { 00:08:36.893 "name": null, 00:08:36.893 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:36.893 "is_configured": false, 00:08:36.893 "data_offset": 0, 00:08:36.893 "data_size": 65536 00:08:36.893 }, 00:08:36.893 { 00:08:36.893 "name": "BaseBdev3", 00:08:36.893 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:36.893 "is_configured": true, 00:08:36.893 "data_offset": 0, 00:08:36.893 "data_size": 65536 00:08:36.893 } 00:08:36.893 ] 00:08:36.893 }' 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.893 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.462 01:52:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 [2024-12-07 01:52:42.790600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.462 BaseBdev1 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 [ 00:08:37.462 { 00:08:37.462 "name": "BaseBdev1", 00:08:37.462 "aliases": [ 00:08:37.462 "2a8b4348-462a-407c-add5-d6d903c4c885" 00:08:37.462 ], 00:08:37.462 "product_name": "Malloc disk", 
00:08:37.462 "block_size": 512, 00:08:37.462 "num_blocks": 65536, 00:08:37.462 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:37.462 "assigned_rate_limits": { 00:08:37.462 "rw_ios_per_sec": 0, 00:08:37.462 "rw_mbytes_per_sec": 0, 00:08:37.462 "r_mbytes_per_sec": 0, 00:08:37.462 "w_mbytes_per_sec": 0 00:08:37.462 }, 00:08:37.462 "claimed": true, 00:08:37.462 "claim_type": "exclusive_write", 00:08:37.462 "zoned": false, 00:08:37.462 "supported_io_types": { 00:08:37.462 "read": true, 00:08:37.462 "write": true, 00:08:37.462 "unmap": true, 00:08:37.462 "flush": true, 00:08:37.462 "reset": true, 00:08:37.462 "nvme_admin": false, 00:08:37.462 "nvme_io": false, 00:08:37.462 "nvme_io_md": false, 00:08:37.462 "write_zeroes": true, 00:08:37.462 "zcopy": true, 00:08:37.462 "get_zone_info": false, 00:08:37.462 "zone_management": false, 00:08:37.462 "zone_append": false, 00:08:37.462 "compare": false, 00:08:37.462 "compare_and_write": false, 00:08:37.462 "abort": true, 00:08:37.462 "seek_hole": false, 00:08:37.462 "seek_data": false, 00:08:37.462 "copy": true, 00:08:37.462 "nvme_iov_md": false 00:08:37.462 }, 00:08:37.462 "memory_domains": [ 00:08:37.462 { 00:08:37.462 "dma_device_id": "system", 00:08:37.462 "dma_device_type": 1 00:08:37.462 }, 00:08:37.462 { 00:08:37.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.462 "dma_device_type": 2 00:08:37.462 } 00:08:37.462 ], 00:08:37.462 "driver_specific": {} 00:08:37.462 } 00:08:37.462 ] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.462 "name": "Existed_Raid", 00:08:37.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.462 "strip_size_kb": 0, 00:08:37.462 "state": "configuring", 00:08:37.462 "raid_level": "raid1", 00:08:37.462 "superblock": false, 00:08:37.462 "num_base_bdevs": 3, 00:08:37.462 "num_base_bdevs_discovered": 2, 00:08:37.462 "num_base_bdevs_operational": 3, 00:08:37.462 "base_bdevs_list": [ 00:08:37.462 { 00:08:37.462 "name": "BaseBdev1", 00:08:37.462 "uuid": 
"2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:37.462 "is_configured": true, 00:08:37.462 "data_offset": 0, 00:08:37.462 "data_size": 65536 00:08:37.462 }, 00:08:37.462 { 00:08:37.462 "name": null, 00:08:37.462 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:37.462 "is_configured": false, 00:08:37.462 "data_offset": 0, 00:08:37.462 "data_size": 65536 00:08:37.462 }, 00:08:37.462 { 00:08:37.462 "name": "BaseBdev3", 00:08:37.462 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:37.462 "is_configured": true, 00:08:37.462 "data_offset": 0, 00:08:37.462 "data_size": 65536 00:08:37.462 } 00:08:37.462 ] 00:08:37.462 }' 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.462 01:52:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:38.030 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.031 [2024-12-07 01:52:43.317772] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.031 01:52:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.031 "name": "Existed_Raid", 00:08:38.031 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:38.031 "strip_size_kb": 0, 00:08:38.031 "state": "configuring", 00:08:38.031 "raid_level": "raid1", 00:08:38.031 "superblock": false, 00:08:38.031 "num_base_bdevs": 3, 00:08:38.031 "num_base_bdevs_discovered": 1, 00:08:38.031 "num_base_bdevs_operational": 3, 00:08:38.031 "base_bdevs_list": [ 00:08:38.031 { 00:08:38.031 "name": "BaseBdev1", 00:08:38.031 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:38.031 "is_configured": true, 00:08:38.031 "data_offset": 0, 00:08:38.031 "data_size": 65536 00:08:38.031 }, 00:08:38.031 { 00:08:38.031 "name": null, 00:08:38.031 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:38.031 "is_configured": false, 00:08:38.031 "data_offset": 0, 00:08:38.031 "data_size": 65536 00:08:38.031 }, 00:08:38.031 { 00:08:38.031 "name": null, 00:08:38.031 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:38.031 "is_configured": false, 00:08:38.031 "data_offset": 0, 00:08:38.031 "data_size": 65536 00:08:38.031 } 00:08:38.031 ] 00:08:38.031 }' 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.031 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 [2024-12-07 01:52:43.804906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.599 "name": "Existed_Raid", 00:08:38.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.599 "strip_size_kb": 0, 00:08:38.599 "state": "configuring", 00:08:38.599 "raid_level": "raid1", 00:08:38.599 "superblock": false, 00:08:38.599 "num_base_bdevs": 3, 00:08:38.599 "num_base_bdevs_discovered": 2, 00:08:38.599 "num_base_bdevs_operational": 3, 00:08:38.599 "base_bdevs_list": [ 00:08:38.599 { 00:08:38.599 "name": "BaseBdev1", 00:08:38.599 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:38.599 "is_configured": true, 00:08:38.599 "data_offset": 0, 00:08:38.599 "data_size": 65536 00:08:38.599 }, 00:08:38.599 { 00:08:38.599 "name": null, 00:08:38.599 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:38.599 "is_configured": false, 00:08:38.599 "data_offset": 0, 00:08:38.599 "data_size": 65536 00:08:38.599 }, 00:08:38.599 { 00:08:38.599 "name": "BaseBdev3", 00:08:38.599 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:38.599 "is_configured": true, 00:08:38.599 "data_offset": 0, 00:08:38.599 "data_size": 65536 00:08:38.599 } 00:08:38.599 ] 00:08:38.599 }' 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.599 01:52:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 [2024-12-07 01:52:44.224215] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.858 01:52:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.858 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.858 "name": "Existed_Raid", 00:08:38.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.858 "strip_size_kb": 0, 00:08:38.858 "state": "configuring", 00:08:38.858 "raid_level": "raid1", 00:08:38.858 "superblock": false, 00:08:38.858 "num_base_bdevs": 3, 00:08:38.858 "num_base_bdevs_discovered": 1, 00:08:38.858 "num_base_bdevs_operational": 3, 00:08:38.858 "base_bdevs_list": [ 00:08:38.858 { 00:08:38.858 "name": null, 00:08:38.858 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:38.858 "is_configured": false, 00:08:38.858 "data_offset": 0, 00:08:38.858 "data_size": 65536 00:08:38.858 }, 00:08:38.858 { 00:08:38.858 "name": null, 00:08:38.858 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:38.858 "is_configured": false, 00:08:38.858 "data_offset": 0, 00:08:38.858 "data_size": 65536 00:08:38.858 }, 00:08:38.858 { 00:08:38.858 "name": "BaseBdev3", 00:08:38.859 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:38.859 "is_configured": true, 00:08:38.859 "data_offset": 0, 00:08:38.859 "data_size": 65536 00:08:38.859 } 00:08:38.859 ] 00:08:38.859 }' 00:08:38.859 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.859 01:52:44 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 [2024-12-07 01:52:44.725815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.444 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.444 "name": "Existed_Raid", 00:08:39.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.444 "strip_size_kb": 0, 00:08:39.444 "state": "configuring", 00:08:39.444 "raid_level": "raid1", 00:08:39.444 "superblock": false, 00:08:39.444 "num_base_bdevs": 3, 00:08:39.444 "num_base_bdevs_discovered": 2, 00:08:39.444 "num_base_bdevs_operational": 3, 00:08:39.444 "base_bdevs_list": [ 00:08:39.444 { 00:08:39.444 "name": null, 00:08:39.444 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:39.444 "is_configured": false, 00:08:39.444 "data_offset": 0, 00:08:39.444 "data_size": 65536 00:08:39.444 }, 00:08:39.444 { 00:08:39.444 "name": "BaseBdev2", 00:08:39.444 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:39.444 "is_configured": true, 00:08:39.444 "data_offset": 0, 00:08:39.444 "data_size": 65536 00:08:39.444 }, 00:08:39.444 { 
00:08:39.444 "name": "BaseBdev3", 00:08:39.444 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:39.444 "is_configured": true, 00:08:39.444 "data_offset": 0, 00:08:39.444 "data_size": 65536 00:08:39.444 } 00:08:39.444 ] 00:08:39.444 }' 00:08:39.445 01:52:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.445 01:52:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.704 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.704 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.704 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2a8b4348-462a-407c-add5-d6d903c4c885 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.963 01:52:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.963 [2024-12-07 01:52:45.223793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:39.963 [2024-12-07 01:52:45.223838] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:39.963 [2024-12-07 01:52:45.223846] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:39.963 [2024-12-07 01:52:45.224087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:39.963 [2024-12-07 01:52:45.224214] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:39.963 [2024-12-07 01:52:45.224226] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:39.963 [2024-12-07 01:52:45.224403] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.963 NewBaseBdev 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.963 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.963 [ 00:08:39.963 { 00:08:39.963 "name": "NewBaseBdev", 00:08:39.963 "aliases": [ 00:08:39.963 "2a8b4348-462a-407c-add5-d6d903c4c885" 00:08:39.963 ], 00:08:39.963 "product_name": "Malloc disk", 00:08:39.963 "block_size": 512, 00:08:39.963 "num_blocks": 65536, 00:08:39.963 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:39.963 "assigned_rate_limits": { 00:08:39.963 "rw_ios_per_sec": 0, 00:08:39.963 "rw_mbytes_per_sec": 0, 00:08:39.963 "r_mbytes_per_sec": 0, 00:08:39.963 "w_mbytes_per_sec": 0 00:08:39.963 }, 00:08:39.963 "claimed": true, 00:08:39.963 "claim_type": "exclusive_write", 00:08:39.963 "zoned": false, 00:08:39.963 "supported_io_types": { 00:08:39.963 "read": true, 00:08:39.963 "write": true, 00:08:39.963 "unmap": true, 00:08:39.963 "flush": true, 00:08:39.963 "reset": true, 00:08:39.963 "nvme_admin": false, 00:08:39.963 "nvme_io": false, 00:08:39.963 "nvme_io_md": false, 00:08:39.963 "write_zeroes": true, 00:08:39.963 "zcopy": true, 00:08:39.963 "get_zone_info": false, 00:08:39.963 "zone_management": false, 00:08:39.963 "zone_append": false, 00:08:39.963 "compare": false, 00:08:39.963 "compare_and_write": false, 00:08:39.963 "abort": true, 00:08:39.963 "seek_hole": false, 00:08:39.963 "seek_data": false, 00:08:39.963 "copy": true, 00:08:39.963 "nvme_iov_md": false 00:08:39.963 }, 00:08:39.963 "memory_domains": [ 00:08:39.964 { 00:08:39.964 
"dma_device_id": "system", 00:08:39.964 "dma_device_type": 1 00:08:39.964 }, 00:08:39.964 { 00:08:39.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.964 "dma_device_type": 2 00:08:39.964 } 00:08:39.964 ], 00:08:39.964 "driver_specific": {} 00:08:39.964 } 00:08:39.964 ] 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.964 01:52:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.964 "name": "Existed_Raid", 00:08:39.964 "uuid": "64623a2a-5c79-4f73-8d08-6f9ead390507", 00:08:39.964 "strip_size_kb": 0, 00:08:39.964 "state": "online", 00:08:39.964 "raid_level": "raid1", 00:08:39.964 "superblock": false, 00:08:39.964 "num_base_bdevs": 3, 00:08:39.964 "num_base_bdevs_discovered": 3, 00:08:39.964 "num_base_bdevs_operational": 3, 00:08:39.964 "base_bdevs_list": [ 00:08:39.964 { 00:08:39.964 "name": "NewBaseBdev", 00:08:39.964 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:39.964 "is_configured": true, 00:08:39.964 "data_offset": 0, 00:08:39.964 "data_size": 65536 00:08:39.964 }, 00:08:39.964 { 00:08:39.964 "name": "BaseBdev2", 00:08:39.964 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:39.964 "is_configured": true, 00:08:39.964 "data_offset": 0, 00:08:39.964 "data_size": 65536 00:08:39.964 }, 00:08:39.964 { 00:08:39.964 "name": "BaseBdev3", 00:08:39.964 "uuid": "46d3c218-49da-4567-9920-b9157b978ede", 00:08:39.964 "is_configured": true, 00:08:39.964 "data_offset": 0, 00:08:39.964 "data_size": 65536 00:08:39.964 } 00:08:39.964 ] 00:08:39.964 }' 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.964 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:40.532 
01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.532 [2024-12-07 01:52:45.695394] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.532 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:40.532 "name": "Existed_Raid", 00:08:40.532 "aliases": [ 00:08:40.532 "64623a2a-5c79-4f73-8d08-6f9ead390507" 00:08:40.532 ], 00:08:40.532 "product_name": "Raid Volume", 00:08:40.532 "block_size": 512, 00:08:40.532 "num_blocks": 65536, 00:08:40.532 "uuid": "64623a2a-5c79-4f73-8d08-6f9ead390507", 00:08:40.532 "assigned_rate_limits": { 00:08:40.532 "rw_ios_per_sec": 0, 00:08:40.532 "rw_mbytes_per_sec": 0, 00:08:40.532 "r_mbytes_per_sec": 0, 00:08:40.532 "w_mbytes_per_sec": 0 00:08:40.532 }, 00:08:40.532 "claimed": false, 00:08:40.532 "zoned": false, 00:08:40.532 "supported_io_types": { 00:08:40.532 "read": true, 00:08:40.532 "write": true, 00:08:40.532 "unmap": false, 00:08:40.532 "flush": false, 00:08:40.532 "reset": true, 00:08:40.532 "nvme_admin": false, 00:08:40.532 "nvme_io": false, 00:08:40.532 "nvme_io_md": false, 00:08:40.532 "write_zeroes": true, 00:08:40.532 "zcopy": false, 00:08:40.532 
"get_zone_info": false, 00:08:40.532 "zone_management": false, 00:08:40.532 "zone_append": false, 00:08:40.532 "compare": false, 00:08:40.532 "compare_and_write": false, 00:08:40.532 "abort": false, 00:08:40.532 "seek_hole": false, 00:08:40.532 "seek_data": false, 00:08:40.532 "copy": false, 00:08:40.532 "nvme_iov_md": false 00:08:40.532 }, 00:08:40.532 "memory_domains": [ 00:08:40.532 { 00:08:40.532 "dma_device_id": "system", 00:08:40.532 "dma_device_type": 1 00:08:40.532 }, 00:08:40.532 { 00:08:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.532 "dma_device_type": 2 00:08:40.532 }, 00:08:40.532 { 00:08:40.533 "dma_device_id": "system", 00:08:40.533 "dma_device_type": 1 00:08:40.533 }, 00:08:40.533 { 00:08:40.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.533 "dma_device_type": 2 00:08:40.533 }, 00:08:40.533 { 00:08:40.533 "dma_device_id": "system", 00:08:40.533 "dma_device_type": 1 00:08:40.533 }, 00:08:40.533 { 00:08:40.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.533 "dma_device_type": 2 00:08:40.533 } 00:08:40.533 ], 00:08:40.533 "driver_specific": { 00:08:40.533 "raid": { 00:08:40.533 "uuid": "64623a2a-5c79-4f73-8d08-6f9ead390507", 00:08:40.533 "strip_size_kb": 0, 00:08:40.533 "state": "online", 00:08:40.533 "raid_level": "raid1", 00:08:40.533 "superblock": false, 00:08:40.533 "num_base_bdevs": 3, 00:08:40.533 "num_base_bdevs_discovered": 3, 00:08:40.533 "num_base_bdevs_operational": 3, 00:08:40.533 "base_bdevs_list": [ 00:08:40.533 { 00:08:40.533 "name": "NewBaseBdev", 00:08:40.533 "uuid": "2a8b4348-462a-407c-add5-d6d903c4c885", 00:08:40.533 "is_configured": true, 00:08:40.533 "data_offset": 0, 00:08:40.533 "data_size": 65536 00:08:40.533 }, 00:08:40.533 { 00:08:40.533 "name": "BaseBdev2", 00:08:40.533 "uuid": "1a8d4995-57d7-4c0c-9177-7061731d8ca0", 00:08:40.533 "is_configured": true, 00:08:40.533 "data_offset": 0, 00:08:40.533 "data_size": 65536 00:08:40.533 }, 00:08:40.533 { 00:08:40.533 "name": "BaseBdev3", 00:08:40.533 "uuid": 
"46d3c218-49da-4567-9920-b9157b978ede", 00:08:40.533 "is_configured": true, 00:08:40.533 "data_offset": 0, 00:08:40.533 "data_size": 65536 00:08:40.533 } 00:08:40.533 ] 00:08:40.533 } 00:08:40.533 } 00:08:40.533 }' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:40.533 BaseBdev2 00:08:40.533 BaseBdev3' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.533 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 
[2024-12-07 01:52:45.986573] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:40.533 [2024-12-07 01:52:45.986602] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.533 [2024-12-07 01:52:45.986687] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.533 [2024-12-07 01:52:45.986935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.533 [2024-12-07 01:52:45.986950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78189 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 78189 ']' 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 78189 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:40.793 01:52:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78189 00:08:40.793 killing process with pid 78189 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78189' 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 78189 00:08:40.793 [2024-12-07 
01:52:46.028238] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.793 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 78189 00:08:40.793 [2024-12-07 01:52:46.058916] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.052 01:52:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:41.052 ************************************ 00:08:41.052 END TEST raid_state_function_test 00:08:41.052 ************************************ 00:08:41.052 00:08:41.052 real 0m8.739s 00:08:41.052 user 0m15.009s 00:08:41.052 sys 0m1.702s 00:08:41.052 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.052 01:52:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.052 01:52:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:41.053 01:52:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:41.053 01:52:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.053 01:52:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.053 ************************************ 00:08:41.053 START TEST raid_state_function_test_sb 00:08:41.053 ************************************ 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:41.053 01:52:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:41.053 
01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78789 00:08:41.053 Process raid pid: 78789 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78789' 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78789 00:08:41.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78789 ']' 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.053 01:52:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.053 [2024-12-07 01:52:46.458471] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:41.053 [2024-12-07 01:52:46.458594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.313 [2024-12-07 01:52:46.602867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.313 [2024-12-07 01:52:46.647258] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.313 [2024-12-07 01:52:46.688491] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.313 [2024-12-07 01:52:46.688540] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.882 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.882 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:41.882 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:41.882 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.882 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.882 [2024-12-07 01:52:47.293316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.882 [2024-12-07 01:52:47.293402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.882 [2024-12-07 01:52:47.293418] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.882 [2024-12-07 01:52:47.293427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.883 [2024-12-07 01:52:47.293434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:41.883 [2024-12-07 01:52:47.293446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.883 01:52:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.142 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.142 "name": "Existed_Raid", 00:08:42.142 "uuid": "4216382c-df78-43f9-bece-fbb6e881e8e3", 00:08:42.142 "strip_size_kb": 0, 00:08:42.142 "state": "configuring", 00:08:42.142 "raid_level": "raid1", 00:08:42.142 "superblock": true, 00:08:42.142 "num_base_bdevs": 3, 00:08:42.142 "num_base_bdevs_discovered": 0, 00:08:42.142 "num_base_bdevs_operational": 3, 00:08:42.142 "base_bdevs_list": [ 00:08:42.142 { 00:08:42.142 "name": "BaseBdev1", 00:08:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.142 "is_configured": false, 00:08:42.142 "data_offset": 0, 00:08:42.142 "data_size": 0 00:08:42.142 }, 00:08:42.142 { 00:08:42.142 "name": "BaseBdev2", 00:08:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.142 "is_configured": false, 00:08:42.142 "data_offset": 0, 00:08:42.142 "data_size": 0 00:08:42.142 }, 00:08:42.142 { 00:08:42.142 "name": "BaseBdev3", 00:08:42.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.142 "is_configured": false, 00:08:42.142 "data_offset": 0, 00:08:42.142 "data_size": 0 00:08:42.142 } 00:08:42.142 ] 00:08:42.142 }' 00:08:42.142 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.142 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 [2024-12-07 01:52:47.668584] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.403 [2024-12-07 01:52:47.668671] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 [2024-12-07 01:52:47.680574] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.403 [2024-12-07 01:52:47.680612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.403 [2024-12-07 01:52:47.680620] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.403 [2024-12-07 01:52:47.680645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.403 [2024-12-07 01:52:47.680651] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.403 [2024-12-07 01:52:47.680660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 [2024-12-07 01:52:47.701236] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.403 BaseBdev1 
00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.403 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.403 [ 00:08:42.403 { 00:08:42.403 "name": "BaseBdev1", 00:08:42.403 "aliases": [ 00:08:42.403 "755674e0-5cb2-43ea-99f9-baf7d3ade69d" 00:08:42.403 ], 00:08:42.403 "product_name": "Malloc disk", 00:08:42.403 "block_size": 512, 00:08:42.403 "num_blocks": 65536, 00:08:42.403 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:42.403 "assigned_rate_limits": { 00:08:42.404 
"rw_ios_per_sec": 0, 00:08:42.404 "rw_mbytes_per_sec": 0, 00:08:42.404 "r_mbytes_per_sec": 0, 00:08:42.404 "w_mbytes_per_sec": 0 00:08:42.404 }, 00:08:42.404 "claimed": true, 00:08:42.404 "claim_type": "exclusive_write", 00:08:42.404 "zoned": false, 00:08:42.404 "supported_io_types": { 00:08:42.404 "read": true, 00:08:42.404 "write": true, 00:08:42.404 "unmap": true, 00:08:42.404 "flush": true, 00:08:42.404 "reset": true, 00:08:42.404 "nvme_admin": false, 00:08:42.404 "nvme_io": false, 00:08:42.404 "nvme_io_md": false, 00:08:42.404 "write_zeroes": true, 00:08:42.404 "zcopy": true, 00:08:42.404 "get_zone_info": false, 00:08:42.404 "zone_management": false, 00:08:42.404 "zone_append": false, 00:08:42.404 "compare": false, 00:08:42.404 "compare_and_write": false, 00:08:42.404 "abort": true, 00:08:42.404 "seek_hole": false, 00:08:42.404 "seek_data": false, 00:08:42.404 "copy": true, 00:08:42.404 "nvme_iov_md": false 00:08:42.404 }, 00:08:42.404 "memory_domains": [ 00:08:42.404 { 00:08:42.404 "dma_device_id": "system", 00:08:42.404 "dma_device_type": 1 00:08:42.404 }, 00:08:42.404 { 00:08:42.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.404 "dma_device_type": 2 00:08:42.404 } 00:08:42.404 ], 00:08:42.404 "driver_specific": {} 00:08:42.404 } 00:08:42.404 ] 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.404 "name": "Existed_Raid", 00:08:42.404 "uuid": "a5be62b7-b15c-4c34-9e56-bc3b1f34fa58", 00:08:42.404 "strip_size_kb": 0, 00:08:42.404 "state": "configuring", 00:08:42.404 "raid_level": "raid1", 00:08:42.404 "superblock": true, 00:08:42.404 "num_base_bdevs": 3, 00:08:42.404 "num_base_bdevs_discovered": 1, 00:08:42.404 "num_base_bdevs_operational": 3, 00:08:42.404 "base_bdevs_list": [ 00:08:42.404 { 00:08:42.404 "name": "BaseBdev1", 00:08:42.404 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:42.404 "is_configured": true, 00:08:42.404 "data_offset": 2048, 00:08:42.404 "data_size": 63488 
00:08:42.404 }, 00:08:42.404 { 00:08:42.404 "name": "BaseBdev2", 00:08:42.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.404 "is_configured": false, 00:08:42.404 "data_offset": 0, 00:08:42.404 "data_size": 0 00:08:42.404 }, 00:08:42.404 { 00:08:42.404 "name": "BaseBdev3", 00:08:42.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.404 "is_configured": false, 00:08:42.404 "data_offset": 0, 00:08:42.404 "data_size": 0 00:08:42.404 } 00:08:42.404 ] 00:08:42.404 }' 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.404 01:52:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.973 [2024-12-07 01:52:48.168469] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:42.973 [2024-12-07 01:52:48.168559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.973 [2024-12-07 01:52:48.176506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.973 [2024-12-07 01:52:48.178375] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.973 [2024-12-07 01:52:48.178417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.973 [2024-12-07 01:52:48.178427] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:42.973 [2024-12-07 01:52:48.178438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.973 "name": "Existed_Raid", 00:08:42.973 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:42.973 "strip_size_kb": 0, 00:08:42.973 "state": "configuring", 00:08:42.973 "raid_level": "raid1", 00:08:42.973 "superblock": true, 00:08:42.973 "num_base_bdevs": 3, 00:08:42.973 "num_base_bdevs_discovered": 1, 00:08:42.973 "num_base_bdevs_operational": 3, 00:08:42.973 "base_bdevs_list": [ 00:08:42.973 { 00:08:42.973 "name": "BaseBdev1", 00:08:42.973 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:42.973 "is_configured": true, 00:08:42.973 "data_offset": 2048, 00:08:42.973 "data_size": 63488 00:08:42.973 }, 00:08:42.973 { 00:08:42.973 "name": "BaseBdev2", 00:08:42.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.973 "is_configured": false, 00:08:42.973 "data_offset": 0, 00:08:42.973 "data_size": 0 00:08:42.973 }, 00:08:42.973 { 00:08:42.973 "name": "BaseBdev3", 00:08:42.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.973 "is_configured": false, 00:08:42.973 "data_offset": 0, 00:08:42.973 "data_size": 0 00:08:42.973 } 00:08:42.973 ] 00:08:42.973 }' 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.973 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 [2024-12-07 01:52:48.658502] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.234 BaseBdev2 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:43.234 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.234 [ 00:08:43.234 { 00:08:43.234 "name": "BaseBdev2", 00:08:43.234 "aliases": [ 00:08:43.234 "950d91b1-e4c7-46b6-ab21-e82e8eab71ae" 00:08:43.234 ], 00:08:43.234 "product_name": "Malloc disk", 00:08:43.234 "block_size": 512, 00:08:43.234 "num_blocks": 65536, 00:08:43.234 "uuid": "950d91b1-e4c7-46b6-ab21-e82e8eab71ae", 00:08:43.234 "assigned_rate_limits": { 00:08:43.234 "rw_ios_per_sec": 0, 00:08:43.234 "rw_mbytes_per_sec": 0, 00:08:43.234 "r_mbytes_per_sec": 0, 00:08:43.234 "w_mbytes_per_sec": 0 00:08:43.234 }, 00:08:43.234 "claimed": true, 00:08:43.234 "claim_type": "exclusive_write", 00:08:43.234 "zoned": false, 00:08:43.234 "supported_io_types": { 00:08:43.234 "read": true, 00:08:43.234 "write": true, 00:08:43.234 "unmap": true, 00:08:43.234 "flush": true, 00:08:43.234 "reset": true, 00:08:43.234 "nvme_admin": false, 00:08:43.234 "nvme_io": false, 00:08:43.234 "nvme_io_md": false, 00:08:43.234 "write_zeroes": true, 00:08:43.234 "zcopy": true, 00:08:43.234 "get_zone_info": false, 00:08:43.234 "zone_management": false, 00:08:43.234 "zone_append": false, 00:08:43.234 "compare": false, 00:08:43.234 "compare_and_write": false, 00:08:43.234 "abort": true, 00:08:43.494 "seek_hole": false, 00:08:43.494 "seek_data": false, 00:08:43.494 "copy": true, 00:08:43.494 "nvme_iov_md": false 00:08:43.494 }, 00:08:43.494 "memory_domains": [ 00:08:43.494 { 00:08:43.494 "dma_device_id": "system", 00:08:43.494 "dma_device_type": 1 00:08:43.494 }, 00:08:43.494 { 00:08:43.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.494 "dma_device_type": 2 00:08:43.494 } 00:08:43.494 ], 00:08:43.494 "driver_specific": {} 00:08:43.494 } 00:08:43.494 ] 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.494 
01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.494 "name": "Existed_Raid", 00:08:43.494 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:43.494 "strip_size_kb": 0, 00:08:43.494 "state": "configuring", 00:08:43.494 "raid_level": "raid1", 00:08:43.494 "superblock": true, 00:08:43.494 "num_base_bdevs": 3, 00:08:43.494 "num_base_bdevs_discovered": 2, 00:08:43.494 "num_base_bdevs_operational": 3, 00:08:43.494 "base_bdevs_list": [ 00:08:43.494 { 00:08:43.494 "name": "BaseBdev1", 00:08:43.494 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:43.494 "is_configured": true, 00:08:43.494 "data_offset": 2048, 00:08:43.494 "data_size": 63488 00:08:43.494 }, 00:08:43.494 { 00:08:43.494 "name": "BaseBdev2", 00:08:43.494 "uuid": "950d91b1-e4c7-46b6-ab21-e82e8eab71ae", 00:08:43.494 "is_configured": true, 00:08:43.494 "data_offset": 2048, 00:08:43.494 "data_size": 63488 00:08:43.494 }, 00:08:43.494 { 00:08:43.494 "name": "BaseBdev3", 00:08:43.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.494 "is_configured": false, 00:08:43.494 "data_offset": 0, 00:08:43.494 "data_size": 0 00:08:43.494 } 00:08:43.494 ] 00:08:43.494 }' 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.494 01:52:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.754 [2024-12-07 01:52:49.160507] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:43.754 BaseBdev3 00:08:43.754 [2024-12-07 01:52:49.160807] bdev_raid.c:1730:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000001900 00:08:43.754 [2024-12-07 01:52:49.160829] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:43.754 [2024-12-07 01:52:49.161081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:43.754 [2024-12-07 01:52:49.161206] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:43.754 [2024-12-07 01:52:49.161216] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:43.754 [2024-12-07 01:52:49.161335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.754 01:52:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.754 [ 00:08:43.754 { 00:08:43.754 "name": "BaseBdev3", 00:08:43.754 "aliases": [ 00:08:43.754 "6af083bc-4f58-4f62-affc-8e6d98c06f71" 00:08:43.754 ], 00:08:43.754 "product_name": "Malloc disk", 00:08:43.754 "block_size": 512, 00:08:43.754 "num_blocks": 65536, 00:08:43.754 "uuid": "6af083bc-4f58-4f62-affc-8e6d98c06f71", 00:08:43.754 "assigned_rate_limits": { 00:08:43.754 "rw_ios_per_sec": 0, 00:08:43.754 "rw_mbytes_per_sec": 0, 00:08:43.754 "r_mbytes_per_sec": 0, 00:08:43.754 "w_mbytes_per_sec": 0 00:08:43.754 }, 00:08:43.754 "claimed": true, 00:08:43.754 "claim_type": "exclusive_write", 00:08:43.754 "zoned": false, 00:08:43.754 "supported_io_types": { 00:08:43.754 "read": true, 00:08:43.754 "write": true, 00:08:43.754 "unmap": true, 00:08:43.754 "flush": true, 00:08:43.754 "reset": true, 00:08:43.754 "nvme_admin": false, 00:08:43.754 "nvme_io": false, 00:08:43.754 "nvme_io_md": false, 00:08:43.754 "write_zeroes": true, 00:08:43.754 "zcopy": true, 00:08:43.754 "get_zone_info": false, 00:08:43.754 "zone_management": false, 00:08:43.754 "zone_append": false, 00:08:43.754 "compare": false, 00:08:43.754 "compare_and_write": false, 00:08:43.754 "abort": true, 00:08:43.754 "seek_hole": false, 00:08:43.754 "seek_data": false, 00:08:43.754 "copy": true, 00:08:43.754 "nvme_iov_md": false 00:08:43.754 }, 00:08:43.754 "memory_domains": [ 00:08:43.754 { 00:08:43.754 "dma_device_id": "system", 00:08:43.754 "dma_device_type": 1 00:08:43.754 }, 00:08:43.754 { 00:08:43.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.754 "dma_device_type": 2 00:08:43.754 } 00:08:43.754 ], 00:08:43.754 "driver_specific": {} 00:08:43.754 } 00:08:43.754 ] 
00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.754 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.754 
01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.013 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.013 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.013 "name": "Existed_Raid", 00:08:44.013 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:44.013 "strip_size_kb": 0, 00:08:44.013 "state": "online", 00:08:44.013 "raid_level": "raid1", 00:08:44.013 "superblock": true, 00:08:44.013 "num_base_bdevs": 3, 00:08:44.013 "num_base_bdevs_discovered": 3, 00:08:44.013 "num_base_bdevs_operational": 3, 00:08:44.013 "base_bdevs_list": [ 00:08:44.013 { 00:08:44.013 "name": "BaseBdev1", 00:08:44.013 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:44.013 "is_configured": true, 00:08:44.013 "data_offset": 2048, 00:08:44.013 "data_size": 63488 00:08:44.013 }, 00:08:44.013 { 00:08:44.013 "name": "BaseBdev2", 00:08:44.013 "uuid": "950d91b1-e4c7-46b6-ab21-e82e8eab71ae", 00:08:44.013 "is_configured": true, 00:08:44.013 "data_offset": 2048, 00:08:44.013 "data_size": 63488 00:08:44.013 }, 00:08:44.013 { 00:08:44.013 "name": "BaseBdev3", 00:08:44.013 "uuid": "6af083bc-4f58-4f62-affc-8e6d98c06f71", 00:08:44.013 "is_configured": true, 00:08:44.013 "data_offset": 2048, 00:08:44.013 "data_size": 63488 00:08:44.013 } 00:08:44.013 ] 00:08:44.013 }' 00:08:44.013 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.013 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.273 [2024-12-07 01:52:49.664041] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.273 "name": "Existed_Raid", 00:08:44.273 "aliases": [ 00:08:44.273 "8c3ab196-92b1-4522-a90b-e35bfbbc4893" 00:08:44.273 ], 00:08:44.273 "product_name": "Raid Volume", 00:08:44.273 "block_size": 512, 00:08:44.273 "num_blocks": 63488, 00:08:44.273 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:44.273 "assigned_rate_limits": { 00:08:44.273 "rw_ios_per_sec": 0, 00:08:44.273 "rw_mbytes_per_sec": 0, 00:08:44.273 "r_mbytes_per_sec": 0, 00:08:44.273 "w_mbytes_per_sec": 0 00:08:44.273 }, 00:08:44.273 "claimed": false, 00:08:44.273 "zoned": false, 00:08:44.273 "supported_io_types": { 00:08:44.273 "read": true, 00:08:44.273 "write": true, 00:08:44.273 "unmap": false, 00:08:44.273 "flush": false, 00:08:44.273 "reset": true, 00:08:44.273 "nvme_admin": false, 00:08:44.273 "nvme_io": false, 00:08:44.273 "nvme_io_md": false, 00:08:44.273 "write_zeroes": true, 
00:08:44.273 "zcopy": false, 00:08:44.273 "get_zone_info": false, 00:08:44.273 "zone_management": false, 00:08:44.273 "zone_append": false, 00:08:44.273 "compare": false, 00:08:44.273 "compare_and_write": false, 00:08:44.273 "abort": false, 00:08:44.273 "seek_hole": false, 00:08:44.273 "seek_data": false, 00:08:44.273 "copy": false, 00:08:44.273 "nvme_iov_md": false 00:08:44.273 }, 00:08:44.273 "memory_domains": [ 00:08:44.273 { 00:08:44.273 "dma_device_id": "system", 00:08:44.273 "dma_device_type": 1 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.273 "dma_device_type": 2 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "dma_device_id": "system", 00:08:44.273 "dma_device_type": 1 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.273 "dma_device_type": 2 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "dma_device_id": "system", 00:08:44.273 "dma_device_type": 1 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.273 "dma_device_type": 2 00:08:44.273 } 00:08:44.273 ], 00:08:44.273 "driver_specific": { 00:08:44.273 "raid": { 00:08:44.273 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:44.273 "strip_size_kb": 0, 00:08:44.273 "state": "online", 00:08:44.273 "raid_level": "raid1", 00:08:44.273 "superblock": true, 00:08:44.273 "num_base_bdevs": 3, 00:08:44.273 "num_base_bdevs_discovered": 3, 00:08:44.273 "num_base_bdevs_operational": 3, 00:08:44.273 "base_bdevs_list": [ 00:08:44.273 { 00:08:44.273 "name": "BaseBdev1", 00:08:44.273 "uuid": "755674e0-5cb2-43ea-99f9-baf7d3ade69d", 00:08:44.273 "is_configured": true, 00:08:44.273 "data_offset": 2048, 00:08:44.273 "data_size": 63488 00:08:44.273 }, 00:08:44.273 { 00:08:44.273 "name": "BaseBdev2", 00:08:44.273 "uuid": "950d91b1-e4c7-46b6-ab21-e82e8eab71ae", 00:08:44.273 "is_configured": true, 00:08:44.273 "data_offset": 2048, 00:08:44.273 "data_size": 63488 00:08:44.273 }, 00:08:44.273 { 
00:08:44.273 "name": "BaseBdev3", 00:08:44.273 "uuid": "6af083bc-4f58-4f62-affc-8e6d98c06f71", 00:08:44.273 "is_configured": true, 00:08:44.273 "data_offset": 2048, 00:08:44.273 "data_size": 63488 00:08:44.273 } 00:08:44.273 ] 00:08:44.273 } 00:08:44.273 } 00:08:44.273 }' 00:08:44.273 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.534 BaseBdev2 00:08:44.534 BaseBdev3' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.534 01:52:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.534 [2024-12-07 01:52:49.963248] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.534 
01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.534 01:52:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.795 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.795 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.795 "name": "Existed_Raid", 00:08:44.795 "uuid": "8c3ab196-92b1-4522-a90b-e35bfbbc4893", 00:08:44.795 "strip_size_kb": 0, 00:08:44.795 "state": "online", 00:08:44.795 "raid_level": "raid1", 00:08:44.795 "superblock": true, 00:08:44.795 "num_base_bdevs": 3, 00:08:44.795 "num_base_bdevs_discovered": 2, 00:08:44.795 "num_base_bdevs_operational": 2, 00:08:44.795 "base_bdevs_list": [ 00:08:44.795 { 00:08:44.795 "name": null, 00:08:44.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.795 "is_configured": false, 00:08:44.795 "data_offset": 0, 00:08:44.795 "data_size": 63488 00:08:44.795 }, 00:08:44.795 { 00:08:44.795 "name": "BaseBdev2", 00:08:44.795 "uuid": "950d91b1-e4c7-46b6-ab21-e82e8eab71ae", 00:08:44.795 "is_configured": true, 00:08:44.795 "data_offset": 2048, 00:08:44.795 "data_size": 63488 00:08:44.795 }, 00:08:44.795 { 00:08:44.795 "name": "BaseBdev3", 00:08:44.795 "uuid": "6af083bc-4f58-4f62-affc-8e6d98c06f71", 00:08:44.795 "is_configured": true, 00:08:44.795 "data_offset": 2048, 00:08:44.795 "data_size": 63488 00:08:44.795 } 00:08:44.795 ] 00:08:44.795 }' 00:08:44.795 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.795 
01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.055 [2024-12-07 01:52:50.489640] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.055 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 [2024-12-07 01:52:50.560763] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:45.315 [2024-12-07 01:52:50.560900] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.315 [2024-12-07 01:52:50.572207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.315 [2024-12-07 01:52:50.572265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.315 [2024-12-07 01:52:50.572282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.315 BaseBdev2 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.315 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 [ 00:08:45.316 { 00:08:45.316 "name": "BaseBdev2", 00:08:45.316 "aliases": [ 00:08:45.316 "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666" 00:08:45.316 ], 00:08:45.316 "product_name": "Malloc disk", 00:08:45.316 "block_size": 512, 00:08:45.316 "num_blocks": 65536, 00:08:45.316 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:45.316 "assigned_rate_limits": { 00:08:45.316 "rw_ios_per_sec": 0, 00:08:45.316 "rw_mbytes_per_sec": 0, 00:08:45.316 "r_mbytes_per_sec": 0, 00:08:45.316 "w_mbytes_per_sec": 0 00:08:45.316 }, 00:08:45.316 "claimed": false, 00:08:45.316 "zoned": false, 00:08:45.316 "supported_io_types": { 00:08:45.316 "read": true, 00:08:45.316 "write": true, 00:08:45.316 "unmap": true, 00:08:45.316 "flush": true, 00:08:45.316 "reset": true, 00:08:45.316 "nvme_admin": false, 00:08:45.316 "nvme_io": false, 00:08:45.316 
"nvme_io_md": false, 00:08:45.316 "write_zeroes": true, 00:08:45.316 "zcopy": true, 00:08:45.316 "get_zone_info": false, 00:08:45.316 "zone_management": false, 00:08:45.316 "zone_append": false, 00:08:45.316 "compare": false, 00:08:45.316 "compare_and_write": false, 00:08:45.316 "abort": true, 00:08:45.316 "seek_hole": false, 00:08:45.316 "seek_data": false, 00:08:45.316 "copy": true, 00:08:45.316 "nvme_iov_md": false 00:08:45.316 }, 00:08:45.316 "memory_domains": [ 00:08:45.316 { 00:08:45.316 "dma_device_id": "system", 00:08:45.316 "dma_device_type": 1 00:08:45.316 }, 00:08:45.316 { 00:08:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.316 "dma_device_type": 2 00:08:45.316 } 00:08:45.316 ], 00:08:45.316 "driver_specific": {} 00:08:45.316 } 00:08:45.316 ] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 BaseBdev3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 [ 00:08:45.316 { 00:08:45.316 "name": "BaseBdev3", 00:08:45.316 "aliases": [ 00:08:45.316 "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0" 00:08:45.316 ], 00:08:45.316 "product_name": "Malloc disk", 00:08:45.316 "block_size": 512, 00:08:45.316 "num_blocks": 65536, 00:08:45.316 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:45.316 "assigned_rate_limits": { 00:08:45.316 "rw_ios_per_sec": 0, 00:08:45.316 "rw_mbytes_per_sec": 0, 00:08:45.316 "r_mbytes_per_sec": 0, 00:08:45.316 "w_mbytes_per_sec": 0 00:08:45.316 }, 00:08:45.316 "claimed": false, 00:08:45.316 "zoned": false, 00:08:45.316 "supported_io_types": { 00:08:45.316 "read": true, 00:08:45.316 "write": true, 00:08:45.316 "unmap": true, 00:08:45.316 "flush": true, 00:08:45.316 "reset": true, 00:08:45.316 "nvme_admin": false, 
00:08:45.316 "nvme_io": false, 00:08:45.316 "nvme_io_md": false, 00:08:45.316 "write_zeroes": true, 00:08:45.316 "zcopy": true, 00:08:45.316 "get_zone_info": false, 00:08:45.316 "zone_management": false, 00:08:45.316 "zone_append": false, 00:08:45.316 "compare": false, 00:08:45.316 "compare_and_write": false, 00:08:45.316 "abort": true, 00:08:45.316 "seek_hole": false, 00:08:45.316 "seek_data": false, 00:08:45.316 "copy": true, 00:08:45.316 "nvme_iov_md": false 00:08:45.316 }, 00:08:45.316 "memory_domains": [ 00:08:45.316 { 00:08:45.316 "dma_device_id": "system", 00:08:45.316 "dma_device_type": 1 00:08:45.316 }, 00:08:45.316 { 00:08:45.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.316 "dma_device_type": 2 00:08:45.316 } 00:08:45.316 ], 00:08:45.316 "driver_specific": {} 00:08:45.316 } 00:08:45.316 ] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 [2024-12-07 01:52:50.735547] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:45.316 [2024-12-07 01:52:50.735628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:45.316 [2024-12-07 01:52:50.735676] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.316 [2024-12-07 01:52:50.737482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.316 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.316 
01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.581 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.581 "name": "Existed_Raid", 00:08:45.581 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:45.581 "strip_size_kb": 0, 00:08:45.581 "state": "configuring", 00:08:45.581 "raid_level": "raid1", 00:08:45.581 "superblock": true, 00:08:45.581 "num_base_bdevs": 3, 00:08:45.581 "num_base_bdevs_discovered": 2, 00:08:45.581 "num_base_bdevs_operational": 3, 00:08:45.581 "base_bdevs_list": [ 00:08:45.581 { 00:08:45.581 "name": "BaseBdev1", 00:08:45.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.581 "is_configured": false, 00:08:45.581 "data_offset": 0, 00:08:45.581 "data_size": 0 00:08:45.581 }, 00:08:45.581 { 00:08:45.581 "name": "BaseBdev2", 00:08:45.581 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:45.581 "is_configured": true, 00:08:45.581 "data_offset": 2048, 00:08:45.581 "data_size": 63488 00:08:45.581 }, 00:08:45.581 { 00:08:45.581 "name": "BaseBdev3", 00:08:45.581 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:45.581 "is_configured": true, 00:08:45.581 "data_offset": 2048, 00:08:45.581 "data_size": 63488 00:08:45.581 } 00:08:45.581 ] 00:08:45.581 }' 00:08:45.581 01:52:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.581 01:52:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.840 [2024-12-07 01:52:51.138858] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.840 01:52:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.840 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.841 "name": 
"Existed_Raid", 00:08:45.841 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:45.841 "strip_size_kb": 0, 00:08:45.841 "state": "configuring", 00:08:45.841 "raid_level": "raid1", 00:08:45.841 "superblock": true, 00:08:45.841 "num_base_bdevs": 3, 00:08:45.841 "num_base_bdevs_discovered": 1, 00:08:45.841 "num_base_bdevs_operational": 3, 00:08:45.841 "base_bdevs_list": [ 00:08:45.841 { 00:08:45.841 "name": "BaseBdev1", 00:08:45.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.841 "is_configured": false, 00:08:45.841 "data_offset": 0, 00:08:45.841 "data_size": 0 00:08:45.841 }, 00:08:45.841 { 00:08:45.841 "name": null, 00:08:45.841 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:45.841 "is_configured": false, 00:08:45.841 "data_offset": 0, 00:08:45.841 "data_size": 63488 00:08:45.841 }, 00:08:45.841 { 00:08:45.841 "name": "BaseBdev3", 00:08:45.841 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:45.841 "is_configured": true, 00:08:45.841 "data_offset": 2048, 00:08:45.841 "data_size": 63488 00:08:45.841 } 00:08:45.841 ] 00:08:45.841 }' 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.841 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:46.408 
01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.408 [2024-12-07 01:52:51.652837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.408 BaseBdev1 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.408 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 [ 00:08:46.409 { 00:08:46.409 "name": "BaseBdev1", 00:08:46.409 "aliases": [ 00:08:46.409 "bf3e0625-28ab-4a54-b808-2684000176ec" 00:08:46.409 ], 00:08:46.409 "product_name": "Malloc disk", 00:08:46.409 "block_size": 512, 00:08:46.409 "num_blocks": 65536, 00:08:46.409 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:46.409 "assigned_rate_limits": { 00:08:46.409 "rw_ios_per_sec": 0, 00:08:46.409 "rw_mbytes_per_sec": 0, 00:08:46.409 "r_mbytes_per_sec": 0, 00:08:46.409 "w_mbytes_per_sec": 0 00:08:46.409 }, 00:08:46.409 "claimed": true, 00:08:46.409 "claim_type": "exclusive_write", 00:08:46.409 "zoned": false, 00:08:46.409 "supported_io_types": { 00:08:46.409 "read": true, 00:08:46.409 "write": true, 00:08:46.409 "unmap": true, 00:08:46.409 "flush": true, 00:08:46.409 "reset": true, 00:08:46.409 "nvme_admin": false, 00:08:46.409 "nvme_io": false, 00:08:46.409 "nvme_io_md": false, 00:08:46.409 "write_zeroes": true, 00:08:46.409 "zcopy": true, 00:08:46.409 "get_zone_info": false, 00:08:46.409 "zone_management": false, 00:08:46.409 "zone_append": false, 00:08:46.409 "compare": false, 00:08:46.409 "compare_and_write": false, 00:08:46.409 "abort": true, 00:08:46.409 "seek_hole": false, 00:08:46.409 "seek_data": false, 00:08:46.409 "copy": true, 00:08:46.409 "nvme_iov_md": false 00:08:46.409 }, 00:08:46.409 "memory_domains": [ 00:08:46.409 { 00:08:46.409 "dma_device_id": "system", 00:08:46.409 "dma_device_type": 1 00:08:46.409 }, 00:08:46.409 { 00:08:46.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.409 "dma_device_type": 2 00:08:46.409 } 00:08:46.409 ], 00:08:46.409 "driver_specific": {} 00:08:46.409 } 00:08:46.409 ] 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.409 
01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.409 "name": "Existed_Raid", 00:08:46.409 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:46.409 "strip_size_kb": 0, 
00:08:46.409 "state": "configuring", 00:08:46.409 "raid_level": "raid1", 00:08:46.409 "superblock": true, 00:08:46.409 "num_base_bdevs": 3, 00:08:46.409 "num_base_bdevs_discovered": 2, 00:08:46.409 "num_base_bdevs_operational": 3, 00:08:46.409 "base_bdevs_list": [ 00:08:46.409 { 00:08:46.409 "name": "BaseBdev1", 00:08:46.409 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:46.409 "is_configured": true, 00:08:46.409 "data_offset": 2048, 00:08:46.409 "data_size": 63488 00:08:46.409 }, 00:08:46.409 { 00:08:46.409 "name": null, 00:08:46.409 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:46.409 "is_configured": false, 00:08:46.409 "data_offset": 0, 00:08:46.409 "data_size": 63488 00:08:46.409 }, 00:08:46.409 { 00:08:46.409 "name": "BaseBdev3", 00:08:46.409 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:46.409 "is_configured": true, 00:08:46.409 "data_offset": 2048, 00:08:46.409 "data_size": 63488 00:08:46.409 } 00:08:46.409 ] 00:08:46.409 }' 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.409 01:52:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.667 [2024-12-07 01:52:52.120103] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.667 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.926 "name": "Existed_Raid", 00:08:46.926 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:46.926 "strip_size_kb": 0, 00:08:46.926 "state": "configuring", 00:08:46.926 "raid_level": "raid1", 00:08:46.926 "superblock": true, 00:08:46.926 "num_base_bdevs": 3, 00:08:46.926 "num_base_bdevs_discovered": 1, 00:08:46.926 "num_base_bdevs_operational": 3, 00:08:46.926 "base_bdevs_list": [ 00:08:46.926 { 00:08:46.926 "name": "BaseBdev1", 00:08:46.926 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:46.926 "is_configured": true, 00:08:46.926 "data_offset": 2048, 00:08:46.926 "data_size": 63488 00:08:46.926 }, 00:08:46.926 { 00:08:46.926 "name": null, 00:08:46.926 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:46.926 "is_configured": false, 00:08:46.926 "data_offset": 0, 00:08:46.926 "data_size": 63488 00:08:46.926 }, 00:08:46.926 { 00:08:46.926 "name": null, 00:08:46.926 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:46.926 "is_configured": false, 00:08:46.926 "data_offset": 0, 00:08:46.926 "data_size": 63488 00:08:46.926 } 00:08:46.926 ] 00:08:46.926 }' 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.926 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 [2024-12-07 01:52:52.623252] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.186 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.445 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.445 01:52:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.445 "name": "Existed_Raid", 00:08:47.445 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:47.445 "strip_size_kb": 0, 00:08:47.445 "state": "configuring", 00:08:47.445 "raid_level": "raid1", 00:08:47.445 "superblock": true, 00:08:47.445 "num_base_bdevs": 3, 00:08:47.445 "num_base_bdevs_discovered": 2, 00:08:47.445 "num_base_bdevs_operational": 3, 00:08:47.445 "base_bdevs_list": [ 00:08:47.445 { 00:08:47.445 "name": "BaseBdev1", 00:08:47.445 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:47.445 "is_configured": true, 00:08:47.445 "data_offset": 2048, 00:08:47.445 "data_size": 63488 00:08:47.445 }, 00:08:47.445 { 00:08:47.445 "name": null, 00:08:47.445 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:47.445 "is_configured": false, 00:08:47.445 "data_offset": 0, 00:08:47.445 "data_size": 63488 00:08:47.445 }, 00:08:47.445 { 00:08:47.445 "name": "BaseBdev3", 00:08:47.445 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:47.445 "is_configured": true, 00:08:47.445 "data_offset": 2048, 00:08:47.445 "data_size": 63488 00:08:47.445 } 00:08:47.445 ] 00:08:47.445 }' 00:08:47.445 01:52:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.445 01:52:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.720 [2024-12-07 01:52:53.082684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.720 "name": "Existed_Raid", 00:08:47.720 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:47.720 "strip_size_kb": 0, 00:08:47.720 "state": "configuring", 00:08:47.720 "raid_level": "raid1", 00:08:47.720 "superblock": true, 00:08:47.720 "num_base_bdevs": 3, 00:08:47.720 "num_base_bdevs_discovered": 1, 00:08:47.720 "num_base_bdevs_operational": 3, 00:08:47.720 "base_bdevs_list": [ 00:08:47.720 { 00:08:47.720 "name": null, 00:08:47.720 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:47.720 "is_configured": false, 00:08:47.720 "data_offset": 0, 00:08:47.720 "data_size": 63488 00:08:47.720 }, 00:08:47.720 { 00:08:47.720 "name": null, 00:08:47.720 "uuid": 
"a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:47.720 "is_configured": false, 00:08:47.720 "data_offset": 0, 00:08:47.720 "data_size": 63488 00:08:47.720 }, 00:08:47.720 { 00:08:47.720 "name": "BaseBdev3", 00:08:47.720 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:47.720 "is_configured": true, 00:08:47.720 "data_offset": 2048, 00:08:47.720 "data_size": 63488 00:08:47.720 } 00:08:47.720 ] 00:08:47.720 }' 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.720 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.301 [2024-12-07 01:52:53.540286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.301 "name": "Existed_Raid", 00:08:48.301 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:48.301 "strip_size_kb": 0, 00:08:48.301 "state": "configuring", 00:08:48.301 
"raid_level": "raid1", 00:08:48.301 "superblock": true, 00:08:48.301 "num_base_bdevs": 3, 00:08:48.301 "num_base_bdevs_discovered": 2, 00:08:48.301 "num_base_bdevs_operational": 3, 00:08:48.301 "base_bdevs_list": [ 00:08:48.301 { 00:08:48.301 "name": null, 00:08:48.301 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:48.301 "is_configured": false, 00:08:48.301 "data_offset": 0, 00:08:48.301 "data_size": 63488 00:08:48.301 }, 00:08:48.301 { 00:08:48.301 "name": "BaseBdev2", 00:08:48.301 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:48.301 "is_configured": true, 00:08:48.301 "data_offset": 2048, 00:08:48.301 "data_size": 63488 00:08:48.301 }, 00:08:48.301 { 00:08:48.301 "name": "BaseBdev3", 00:08:48.301 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:48.301 "is_configured": true, 00:08:48.301 "data_offset": 2048, 00:08:48.301 "data_size": 63488 00:08:48.301 } 00:08:48.301 ] 00:08:48.301 }' 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.301 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.560 01:52:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:48.560 01:52:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.560 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bf3e0625-28ab-4a54-b808-2684000176ec 00:08:48.560 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.560 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.560 [2024-12-07 01:52:54.018343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:48.560 [2024-12-07 01:52:54.018611] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:48.560 [2024-12-07 01:52:54.018696] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.560 NewBaseBdev 00:08:48.560 [2024-12-07 01:52:54.018977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:48.560 [2024-12-07 01:52:54.019108] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:48.560 [2024-12-07 01:52:54.019172] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:48.820 [2024-12-07 01:52:54.019307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:48.820 
01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.820 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.820 [ 00:08:48.820 { 00:08:48.820 "name": "NewBaseBdev", 00:08:48.820 "aliases": [ 00:08:48.820 "bf3e0625-28ab-4a54-b808-2684000176ec" 00:08:48.820 ], 00:08:48.820 "product_name": "Malloc disk", 00:08:48.820 "block_size": 512, 00:08:48.820 "num_blocks": 65536, 00:08:48.820 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:48.820 "assigned_rate_limits": { 00:08:48.820 "rw_ios_per_sec": 0, 00:08:48.820 "rw_mbytes_per_sec": 0, 00:08:48.820 "r_mbytes_per_sec": 0, 00:08:48.820 "w_mbytes_per_sec": 0 00:08:48.820 }, 00:08:48.820 "claimed": true, 00:08:48.820 "claim_type": "exclusive_write", 00:08:48.820 
"zoned": false, 00:08:48.820 "supported_io_types": { 00:08:48.820 "read": true, 00:08:48.820 "write": true, 00:08:48.820 "unmap": true, 00:08:48.820 "flush": true, 00:08:48.820 "reset": true, 00:08:48.820 "nvme_admin": false, 00:08:48.820 "nvme_io": false, 00:08:48.820 "nvme_io_md": false, 00:08:48.820 "write_zeroes": true, 00:08:48.820 "zcopy": true, 00:08:48.820 "get_zone_info": false, 00:08:48.820 "zone_management": false, 00:08:48.820 "zone_append": false, 00:08:48.820 "compare": false, 00:08:48.820 "compare_and_write": false, 00:08:48.820 "abort": true, 00:08:48.820 "seek_hole": false, 00:08:48.820 "seek_data": false, 00:08:48.820 "copy": true, 00:08:48.820 "nvme_iov_md": false 00:08:48.820 }, 00:08:48.821 "memory_domains": [ 00:08:48.821 { 00:08:48.821 "dma_device_id": "system", 00:08:48.821 "dma_device_type": 1 00:08:48.821 }, 00:08:48.821 { 00:08:48.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.821 "dma_device_type": 2 00:08:48.821 } 00:08:48.821 ], 00:08:48.821 "driver_specific": {} 00:08:48.821 } 00:08:48.821 ] 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.821 "name": "Existed_Raid", 00:08:48.821 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:48.821 "strip_size_kb": 0, 00:08:48.821 "state": "online", 00:08:48.821 "raid_level": "raid1", 00:08:48.821 "superblock": true, 00:08:48.821 "num_base_bdevs": 3, 00:08:48.821 "num_base_bdevs_discovered": 3, 00:08:48.821 "num_base_bdevs_operational": 3, 00:08:48.821 "base_bdevs_list": [ 00:08:48.821 { 00:08:48.821 "name": "NewBaseBdev", 00:08:48.821 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:48.821 "is_configured": true, 00:08:48.821 "data_offset": 2048, 00:08:48.821 "data_size": 63488 00:08:48.821 }, 00:08:48.821 { 00:08:48.821 "name": "BaseBdev2", 00:08:48.821 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:48.821 "is_configured": true, 00:08:48.821 "data_offset": 2048, 00:08:48.821 "data_size": 63488 00:08:48.821 }, 00:08:48.821 
{ 00:08:48.821 "name": "BaseBdev3", 00:08:48.821 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:48.821 "is_configured": true, 00:08:48.821 "data_offset": 2048, 00:08:48.821 "data_size": 63488 00:08:48.821 } 00:08:48.821 ] 00:08:48.821 }' 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.821 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.079 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.079 [2024-12-07 01:52:54.481890] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.080 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.080 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.080 "name": "Existed_Raid", 00:08:49.080 
"aliases": [ 00:08:49.080 "a2480007-e85e-4da8-8ed1-70d710a516df" 00:08:49.080 ], 00:08:49.080 "product_name": "Raid Volume", 00:08:49.080 "block_size": 512, 00:08:49.080 "num_blocks": 63488, 00:08:49.080 "uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:49.080 "assigned_rate_limits": { 00:08:49.080 "rw_ios_per_sec": 0, 00:08:49.080 "rw_mbytes_per_sec": 0, 00:08:49.080 "r_mbytes_per_sec": 0, 00:08:49.080 "w_mbytes_per_sec": 0 00:08:49.080 }, 00:08:49.080 "claimed": false, 00:08:49.080 "zoned": false, 00:08:49.080 "supported_io_types": { 00:08:49.080 "read": true, 00:08:49.080 "write": true, 00:08:49.080 "unmap": false, 00:08:49.080 "flush": false, 00:08:49.080 "reset": true, 00:08:49.080 "nvme_admin": false, 00:08:49.080 "nvme_io": false, 00:08:49.080 "nvme_io_md": false, 00:08:49.080 "write_zeroes": true, 00:08:49.080 "zcopy": false, 00:08:49.080 "get_zone_info": false, 00:08:49.080 "zone_management": false, 00:08:49.080 "zone_append": false, 00:08:49.080 "compare": false, 00:08:49.080 "compare_and_write": false, 00:08:49.080 "abort": false, 00:08:49.080 "seek_hole": false, 00:08:49.080 "seek_data": false, 00:08:49.080 "copy": false, 00:08:49.080 "nvme_iov_md": false 00:08:49.080 }, 00:08:49.080 "memory_domains": [ 00:08:49.080 { 00:08:49.080 "dma_device_id": "system", 00:08:49.080 "dma_device_type": 1 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.080 "dma_device_type": 2 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "system", 00:08:49.080 "dma_device_type": 1 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.080 "dma_device_type": 2 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "system", 00:08:49.080 "dma_device_type": 1 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.080 "dma_device_type": 2 00:08:49.080 } 00:08:49.080 ], 00:08:49.080 "driver_specific": { 00:08:49.080 "raid": { 00:08:49.080 
"uuid": "a2480007-e85e-4da8-8ed1-70d710a516df", 00:08:49.080 "strip_size_kb": 0, 00:08:49.080 "state": "online", 00:08:49.080 "raid_level": "raid1", 00:08:49.080 "superblock": true, 00:08:49.080 "num_base_bdevs": 3, 00:08:49.080 "num_base_bdevs_discovered": 3, 00:08:49.080 "num_base_bdevs_operational": 3, 00:08:49.080 "base_bdevs_list": [ 00:08:49.080 { 00:08:49.080 "name": "NewBaseBdev", 00:08:49.080 "uuid": "bf3e0625-28ab-4a54-b808-2684000176ec", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "name": "BaseBdev2", 00:08:49.080 "uuid": "a5f8d5d3-ec65-4fe1-9f70-6c6763c46666", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "name": "BaseBdev3", 00:08:49.080 "uuid": "d0d7c829-bb6b-4c77-930b-8c86f9db3fa0", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 } 00:08:49.080 ] 00:08:49.080 } 00:08:49.080 } 00:08:49.080 }' 00:08:49.080 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:49.339 BaseBdev2 00:08:49.339 BaseBdev3' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:49.339 01:52:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.339 [2024-12-07 01:52:54.769070] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:49.339 [2024-12-07 01:52:54.769098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.339 [2024-12-07 01:52:54.769162] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.339 [2024-12-07 01:52:54.769399] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.339 [2024-12-07 01:52:54.769409] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78789 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # 
'[' -z 78789 ']' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 78789 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.339 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78789 00:08:49.598 killing process with pid 78789 00:08:49.598 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.598 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.598 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78789' 00:08:49.598 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 78789 00:08:49.598 [2024-12-07 01:52:54.816636] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.598 01:52:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 78789 00:08:49.598 [2024-12-07 01:52:54.847119] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.857 01:52:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:49.857 00:08:49.857 real 0m8.717s 00:08:49.857 user 0m14.922s 00:08:49.857 sys 0m1.746s 00:08:49.857 ************************************ 00:08:49.857 END TEST raid_state_function_test_sb 00:08:49.857 ************************************ 00:08:49.857 01:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.857 01:52:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.857 01:52:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:08:49.857 01:52:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:49.857 01:52:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.857 01:52:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.857 ************************************ 00:08:49.857 START TEST raid_superblock_test 00:08:49.857 ************************************ 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79392 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79392 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 79392 ']' 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.857 01:52:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.857 [2024-12-07 01:52:55.240227] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:49.857 [2024-12-07 01:52:55.240350] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79392 ] 00:08:50.115 [2024-12-07 01:52:55.386140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.115 [2024-12-07 01:52:55.430000] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.115 [2024-12-07 01:52:55.470782] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.115 [2024-12-07 01:52:55.470819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.682 
01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 malloc1 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.682 [2024-12-07 01:52:56.080127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.682 [2024-12-07 01:52:56.080232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.682 [2024-12-07 01:52:56.080268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:50.682 [2024-12-07 01:52:56.080311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.682 [2024-12-07 01:52:56.082389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.682 [2024-12-07 01:52:56.082479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:50.682 pt1 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:50.682 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.683 malloc2 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.683 [2024-12-07 01:52:56.122167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:50.683 [2024-12-07 01:52:56.122251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.683 [2024-12-07 01:52:56.122299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.683 [2024-12-07 01:52:56.122328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.683 [2024-12-07 01:52:56.124360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.683 [2024-12-07 01:52:56.124447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:50.683 
pt2 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.683 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 malloc3 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 [2024-12-07 01:52:56.150522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:50.941 [2024-12-07 01:52:56.150627] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.941 [2024-12-07 01:52:56.150669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:50.941 [2024-12-07 01:52:56.150699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.941 [2024-12-07 01:52:56.152797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.941 [2024-12-07 01:52:56.152870] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:50.941 pt3 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.941 [2024-12-07 01:52:56.162566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:50.941 [2024-12-07 01:52:56.164492] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:50.941 [2024-12-07 01:52:56.164581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:50.941 [2024-12-07 01:52:56.164768] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:50.941 [2024-12-07 01:52:56.164814] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.941 [2024-12-07 01:52:56.165094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:50.941 
[2024-12-07 01:52:56.165262] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:50.941 [2024-12-07 01:52:56.165308] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:50.941 [2024-12-07 01:52:56.165429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.941 "name": "raid_bdev1", 00:08:50.941 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:50.941 "strip_size_kb": 0, 00:08:50.941 "state": "online", 00:08:50.941 "raid_level": "raid1", 00:08:50.941 "superblock": true, 00:08:50.941 "num_base_bdevs": 3, 00:08:50.941 "num_base_bdevs_discovered": 3, 00:08:50.941 "num_base_bdevs_operational": 3, 00:08:50.941 "base_bdevs_list": [ 00:08:50.941 { 00:08:50.941 "name": "pt1", 00:08:50.941 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 2048, 00:08:50.941 "data_size": 63488 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "name": "pt2", 00:08:50.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 2048, 00:08:50.941 "data_size": 63488 00:08:50.941 }, 00:08:50.941 { 00:08:50.941 "name": "pt3", 00:08:50.941 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:50.941 "is_configured": true, 00:08:50.941 "data_offset": 2048, 00:08:50.941 "data_size": 63488 00:08:50.941 } 00:08:50.941 ] 00:08:50.941 }' 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.941 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.200 01:52:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.200 [2024-12-07 01:52:56.610063] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.200 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.200 "name": "raid_bdev1", 00:08:51.200 "aliases": [ 00:08:51.200 "3b04e7c5-749d-49cf-b99a-250f058692f0" 00:08:51.200 ], 00:08:51.200 "product_name": "Raid Volume", 00:08:51.200 "block_size": 512, 00:08:51.200 "num_blocks": 63488, 00:08:51.200 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:51.200 "assigned_rate_limits": { 00:08:51.200 "rw_ios_per_sec": 0, 00:08:51.200 "rw_mbytes_per_sec": 0, 00:08:51.200 "r_mbytes_per_sec": 0, 00:08:51.200 "w_mbytes_per_sec": 0 00:08:51.200 }, 00:08:51.200 "claimed": false, 00:08:51.200 "zoned": false, 00:08:51.200 "supported_io_types": { 00:08:51.200 "read": true, 00:08:51.200 "write": true, 00:08:51.200 "unmap": false, 00:08:51.200 "flush": false, 00:08:51.200 "reset": true, 00:08:51.200 "nvme_admin": false, 00:08:51.200 "nvme_io": false, 00:08:51.200 "nvme_io_md": false, 00:08:51.200 "write_zeroes": true, 00:08:51.200 "zcopy": false, 00:08:51.200 "get_zone_info": false, 00:08:51.200 "zone_management": false, 00:08:51.200 "zone_append": false, 00:08:51.201 "compare": false, 00:08:51.201 
"compare_and_write": false, 00:08:51.201 "abort": false, 00:08:51.201 "seek_hole": false, 00:08:51.201 "seek_data": false, 00:08:51.201 "copy": false, 00:08:51.201 "nvme_iov_md": false 00:08:51.201 }, 00:08:51.201 "memory_domains": [ 00:08:51.201 { 00:08:51.201 "dma_device_id": "system", 00:08:51.201 "dma_device_type": 1 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.201 "dma_device_type": 2 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "dma_device_id": "system", 00:08:51.201 "dma_device_type": 1 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.201 "dma_device_type": 2 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "dma_device_id": "system", 00:08:51.201 "dma_device_type": 1 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.201 "dma_device_type": 2 00:08:51.201 } 00:08:51.201 ], 00:08:51.201 "driver_specific": { 00:08:51.201 "raid": { 00:08:51.201 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:51.201 "strip_size_kb": 0, 00:08:51.201 "state": "online", 00:08:51.201 "raid_level": "raid1", 00:08:51.201 "superblock": true, 00:08:51.201 "num_base_bdevs": 3, 00:08:51.201 "num_base_bdevs_discovered": 3, 00:08:51.201 "num_base_bdevs_operational": 3, 00:08:51.201 "base_bdevs_list": [ 00:08:51.201 { 00:08:51.201 "name": "pt1", 00:08:51.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.201 "is_configured": true, 00:08:51.201 "data_offset": 2048, 00:08:51.201 "data_size": 63488 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "name": "pt2", 00:08:51.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.201 "is_configured": true, 00:08:51.201 "data_offset": 2048, 00:08:51.201 "data_size": 63488 00:08:51.201 }, 00:08:51.201 { 00:08:51.201 "name": "pt3", 00:08:51.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.201 "is_configured": true, 00:08:51.201 "data_offset": 2048, 00:08:51.201 "data_size": 63488 00:08:51.201 } 
00:08:51.201 ] 00:08:51.201 } 00:08:51.201 } 00:08:51.201 }' 00:08:51.201 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.459 pt2 00:08:51.459 pt3' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.459 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.459 [2024-12-07 01:52:56.913457] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3b04e7c5-749d-49cf-b99a-250f058692f0 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3b04e7c5-749d-49cf-b99a-250f058692f0 ']' 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 [2024-12-07 01:52:56.957141] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.726 [2024-12-07 01:52:56.957202] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.726 [2024-12-07 01:52:56.957313] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.726 [2024-12-07 01:52:56.957417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.726 [2024-12-07 01:52:56.957484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 01:52:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.726 01:52:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.726 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.726 [2024-12-07 01:52:57.096930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.726 [2024-12-07 01:52:57.098791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.726 [2024-12-07 01:52:57.098909] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:51.726 [2024-12-07 01:52:57.098981] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:51.726 [2024-12-07 01:52:57.099072] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.726 [2024-12-07 01:52:57.099125] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:51.726 [2024-12-07 01:52:57.099184] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.726 [2024-12-07 01:52:57.099217] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:51.726 request: 00:08:51.726 { 00:08:51.726 "name": "raid_bdev1", 00:08:51.726 "raid_level": "raid1", 00:08:51.726 "base_bdevs": [ 00:08:51.726 "malloc1", 00:08:51.726 "malloc2", 00:08:51.726 "malloc3" 00:08:51.726 ], 00:08:51.726 "superblock": false, 00:08:51.726 "method": "bdev_raid_create", 00:08:51.726 "req_id": 1 00:08:51.726 } 00:08:51.726 Got JSON-RPC error response 00:08:51.726 response: 00:08:51.726 { 00:08:51.726 "code": -17, 00:08:51.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.727 } 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.727 [2024-12-07 01:52:57.160812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.727 [2024-12-07 01:52:57.160896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.727 [2024-12-07 01:52:57.160928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.727 [2024-12-07 01:52:57.160958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.727 [2024-12-07 01:52:57.163125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.727 [2024-12-07 01:52:57.163195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.727 [2024-12-07 01:52:57.163281] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.727 [2024-12-07 01:52:57.163331] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.727 pt1 00:08:51.727 
01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.727 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.985 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.985 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.985 "name": "raid_bdev1", 00:08:51.985 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:51.985 "strip_size_kb": 0, 00:08:51.985 
"state": "configuring", 00:08:51.985 "raid_level": "raid1", 00:08:51.985 "superblock": true, 00:08:51.985 "num_base_bdevs": 3, 00:08:51.985 "num_base_bdevs_discovered": 1, 00:08:51.985 "num_base_bdevs_operational": 3, 00:08:51.985 "base_bdevs_list": [ 00:08:51.985 { 00:08:51.985 "name": "pt1", 00:08:51.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.986 "is_configured": true, 00:08:51.986 "data_offset": 2048, 00:08:51.986 "data_size": 63488 00:08:51.986 }, 00:08:51.986 { 00:08:51.986 "name": null, 00:08:51.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.986 "is_configured": false, 00:08:51.986 "data_offset": 2048, 00:08:51.986 "data_size": 63488 00:08:51.986 }, 00:08:51.986 { 00:08:51.986 "name": null, 00:08:51.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:51.986 "is_configured": false, 00:08:51.986 "data_offset": 2048, 00:08:51.986 "data_size": 63488 00:08:51.986 } 00:08:51.986 ] 00:08:51.986 }' 00:08:51.986 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.986 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.245 [2024-12-07 01:52:57.628048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.245 [2024-12-07 01:52:57.628153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.245 [2024-12-07 01:52:57.628194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:52.245 
[2024-12-07 01:52:57.628228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.245 [2024-12-07 01:52:57.628636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.245 [2024-12-07 01:52:57.628705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.245 [2024-12-07 01:52:57.628808] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.245 [2024-12-07 01:52:57.628861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.245 pt2 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.245 [2024-12-07 01:52:57.640017] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:52.245 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.246 "name": "raid_bdev1", 00:08:52.246 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:52.246 "strip_size_kb": 0, 00:08:52.246 "state": "configuring", 00:08:52.246 "raid_level": "raid1", 00:08:52.246 "superblock": true, 00:08:52.246 "num_base_bdevs": 3, 00:08:52.246 "num_base_bdevs_discovered": 1, 00:08:52.246 "num_base_bdevs_operational": 3, 00:08:52.246 "base_bdevs_list": [ 00:08:52.246 { 00:08:52.246 "name": "pt1", 00:08:52.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.246 "is_configured": true, 00:08:52.246 "data_offset": 2048, 00:08:52.246 "data_size": 63488 00:08:52.246 }, 00:08:52.246 { 00:08:52.246 "name": null, 00:08:52.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.246 "is_configured": false, 00:08:52.246 "data_offset": 0, 00:08:52.246 "data_size": 63488 00:08:52.246 }, 00:08:52.246 { 00:08:52.246 "name": null, 00:08:52.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.246 "is_configured": false, 00:08:52.246 
"data_offset": 2048, 00:08:52.246 "data_size": 63488 00:08:52.246 } 00:08:52.246 ] 00:08:52.246 }' 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.246 01:52:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.815 [2024-12-07 01:52:58.091249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.815 [2024-12-07 01:52:58.091346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.815 [2024-12-07 01:52:58.091383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:52.815 [2024-12-07 01:52:58.091410] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.815 [2024-12-07 01:52:58.091833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.815 [2024-12-07 01:52:58.091885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.815 [2024-12-07 01:52:58.091987] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.815 [2024-12-07 01:52:58.092035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.815 pt2 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.815 01:52:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.815 [2024-12-07 01:52:58.103210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:52.815 [2024-12-07 01:52:58.103284] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.815 [2024-12-07 01:52:58.103317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:52.815 [2024-12-07 01:52:58.103346] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.815 [2024-12-07 01:52:58.103687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.815 [2024-12-07 01:52:58.103737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:52.815 [2024-12-07 01:52:58.103818] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:52.815 [2024-12-07 01:52:58.103872] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:52.815 [2024-12-07 01:52:58.103994] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:52.815 [2024-12-07 01:52:58.104029] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.815 [2024-12-07 01:52:58.104259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:52.815 [2024-12-07 01:52:58.104397] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:08:52.815 [2024-12-07 01:52:58.104436] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:52.815 [2024-12-07 01:52:58.104565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.815 pt3 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.815 01:52:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.815 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.815 "name": "raid_bdev1", 00:08:52.815 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:52.815 "strip_size_kb": 0, 00:08:52.815 "state": "online", 00:08:52.815 "raid_level": "raid1", 00:08:52.815 "superblock": true, 00:08:52.815 "num_base_bdevs": 3, 00:08:52.815 "num_base_bdevs_discovered": 3, 00:08:52.815 "num_base_bdevs_operational": 3, 00:08:52.815 "base_bdevs_list": [ 00:08:52.815 { 00:08:52.815 "name": "pt1", 00:08:52.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.815 "is_configured": true, 00:08:52.815 "data_offset": 2048, 00:08:52.815 "data_size": 63488 00:08:52.815 }, 00:08:52.815 { 00:08:52.815 "name": "pt2", 00:08:52.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.815 "is_configured": true, 00:08:52.815 "data_offset": 2048, 00:08:52.815 "data_size": 63488 00:08:52.815 }, 00:08:52.815 { 00:08:52.815 "name": "pt3", 00:08:52.816 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:52.816 "is_configured": true, 00:08:52.816 "data_offset": 2048, 00:08:52.816 "data_size": 63488 00:08:52.816 } 00:08:52.816 ] 00:08:52.816 }' 00:08:52.816 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.816 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.076 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.076 [2024-12-07 01:52:58.518977] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.335 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.335 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.335 "name": "raid_bdev1", 00:08:53.335 "aliases": [ 00:08:53.335 "3b04e7c5-749d-49cf-b99a-250f058692f0" 00:08:53.335 ], 00:08:53.335 "product_name": "Raid Volume", 00:08:53.335 "block_size": 512, 00:08:53.335 "num_blocks": 63488, 00:08:53.335 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:53.335 "assigned_rate_limits": { 00:08:53.335 "rw_ios_per_sec": 0, 00:08:53.335 "rw_mbytes_per_sec": 0, 00:08:53.335 "r_mbytes_per_sec": 0, 00:08:53.335 "w_mbytes_per_sec": 0 00:08:53.335 }, 00:08:53.335 "claimed": false, 00:08:53.335 "zoned": false, 00:08:53.335 "supported_io_types": { 00:08:53.335 "read": true, 00:08:53.335 "write": true, 00:08:53.335 "unmap": false, 00:08:53.335 "flush": false, 00:08:53.335 "reset": true, 00:08:53.335 "nvme_admin": false, 00:08:53.335 "nvme_io": false, 00:08:53.336 "nvme_io_md": false, 00:08:53.336 "write_zeroes": true, 00:08:53.336 "zcopy": false, 00:08:53.336 "get_zone_info": 
false, 00:08:53.336 "zone_management": false, 00:08:53.336 "zone_append": false, 00:08:53.336 "compare": false, 00:08:53.336 "compare_and_write": false, 00:08:53.336 "abort": false, 00:08:53.336 "seek_hole": false, 00:08:53.336 "seek_data": false, 00:08:53.336 "copy": false, 00:08:53.336 "nvme_iov_md": false 00:08:53.336 }, 00:08:53.336 "memory_domains": [ 00:08:53.336 { 00:08:53.336 "dma_device_id": "system", 00:08:53.336 "dma_device_type": 1 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.336 "dma_device_type": 2 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "dma_device_id": "system", 00:08:53.336 "dma_device_type": 1 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.336 "dma_device_type": 2 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "dma_device_id": "system", 00:08:53.336 "dma_device_type": 1 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.336 "dma_device_type": 2 00:08:53.336 } 00:08:53.336 ], 00:08:53.336 "driver_specific": { 00:08:53.336 "raid": { 00:08:53.336 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:53.336 "strip_size_kb": 0, 00:08:53.336 "state": "online", 00:08:53.336 "raid_level": "raid1", 00:08:53.336 "superblock": true, 00:08:53.336 "num_base_bdevs": 3, 00:08:53.336 "num_base_bdevs_discovered": 3, 00:08:53.336 "num_base_bdevs_operational": 3, 00:08:53.336 "base_bdevs_list": [ 00:08:53.336 { 00:08:53.336 "name": "pt1", 00:08:53.336 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.336 "is_configured": true, 00:08:53.336 "data_offset": 2048, 00:08:53.336 "data_size": 63488 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "name": "pt2", 00:08:53.336 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.336 "is_configured": true, 00:08:53.336 "data_offset": 2048, 00:08:53.336 "data_size": 63488 00:08:53.336 }, 00:08:53.336 { 00:08:53.336 "name": "pt3", 00:08:53.336 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:53.336 "is_configured": true, 00:08:53.336 "data_offset": 2048, 00:08:53.336 "data_size": 63488 00:08:53.336 } 00:08:53.336 ] 00:08:53.336 } 00:08:53.336 } 00:08:53.336 }' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.336 pt2 00:08:53.336 pt3' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.336 01:52:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.336 [2024-12-07 01:52:58.774458] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.336 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3b04e7c5-749d-49cf-b99a-250f058692f0 '!=' 3b04e7c5-749d-49cf-b99a-250f058692f0 ']' 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.596 [2024-12-07 01:52:58.826175] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.596 01:52:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.596 "name": "raid_bdev1", 00:08:53.596 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:53.596 "strip_size_kb": 0, 00:08:53.596 "state": "online", 00:08:53.596 "raid_level": "raid1", 00:08:53.596 "superblock": true, 00:08:53.596 "num_base_bdevs": 3, 00:08:53.596 "num_base_bdevs_discovered": 2, 00:08:53.596 "num_base_bdevs_operational": 2, 00:08:53.596 "base_bdevs_list": [ 00:08:53.596 { 00:08:53.596 "name": null, 00:08:53.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.596 "is_configured": false, 00:08:53.596 "data_offset": 0, 00:08:53.596 "data_size": 63488 00:08:53.596 }, 00:08:53.596 { 00:08:53.596 "name": "pt2", 00:08:53.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.596 "is_configured": true, 00:08:53.596 "data_offset": 2048, 00:08:53.596 "data_size": 63488 00:08:53.596 }, 00:08:53.596 { 00:08:53.596 "name": "pt3", 00:08:53.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:53.596 "is_configured": true, 00:08:53.596 "data_offset": 2048, 00:08:53.596 "data_size": 63488 00:08:53.596 } 
00:08:53.596 ] 00:08:53.596 }' 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.596 01:52:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 [2024-12-07 01:52:59.285351] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.858 [2024-12-07 01:52:59.285384] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.858 [2024-12-07 01:52:59.285453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.858 [2024-12-07 01:52:59.285510] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.858 [2024-12-07 01:52:59.285518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.858 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.119 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:54.119 01:52:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:54.119 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:54.119 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.120 01:52:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.120 [2024-12-07 01:52:59.369191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.120 [2024-12-07 01:52:59.369287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.120 [2024-12-07 01:52:59.369331] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:54.120 [2024-12-07 01:52:59.369361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.120 [2024-12-07 01:52:59.371530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.120 [2024-12-07 01:52:59.371566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.120 [2024-12-07 01:52:59.371638] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:54.120 [2024-12-07 01:52:59.371681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.120 pt2 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.120 01:52:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.120 "name": "raid_bdev1", 00:08:54.120 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:54.120 "strip_size_kb": 0, 00:08:54.120 "state": "configuring", 00:08:54.120 "raid_level": "raid1", 00:08:54.120 "superblock": true, 00:08:54.120 "num_base_bdevs": 3, 00:08:54.120 "num_base_bdevs_discovered": 1, 00:08:54.120 "num_base_bdevs_operational": 2, 00:08:54.120 "base_bdevs_list": [ 00:08:54.120 { 00:08:54.120 "name": null, 00:08:54.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.120 "is_configured": false, 00:08:54.120 "data_offset": 2048, 00:08:54.120 "data_size": 63488 00:08:54.120 }, 00:08:54.120 { 00:08:54.120 "name": "pt2", 00:08:54.120 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.120 "is_configured": true, 00:08:54.120 "data_offset": 2048, 00:08:54.120 "data_size": 63488 00:08:54.120 }, 00:08:54.120 { 00:08:54.120 "name": null, 00:08:54.120 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.120 "is_configured": false, 00:08:54.120 "data_offset": 2048, 00:08:54.120 "data_size": 63488 00:08:54.120 } 
00:08:54.120 ] 00:08:54.120 }' 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.120 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.380 [2024-12-07 01:52:59.792501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:54.380 [2024-12-07 01:52:59.792607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.380 [2024-12-07 01:52:59.792648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:54.380 [2024-12-07 01:52:59.792687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.380 [2024-12-07 01:52:59.793105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.380 [2024-12-07 01:52:59.793167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:54.380 [2024-12-07 01:52:59.793280] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:54.380 [2024-12-07 01:52:59.793339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:54.380 [2024-12-07 01:52:59.793471] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:08:54.380 [2024-12-07 01:52:59.793506] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.380 [2024-12-07 01:52:59.793785] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:54.380 [2024-12-07 01:52:59.793948] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:54.380 [2024-12-07 01:52:59.793991] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:54.380 [2024-12-07 01:52:59.794133] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.380 pt3 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.380 
01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.380 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.640 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.640 "name": "raid_bdev1", 00:08:54.640 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:54.640 "strip_size_kb": 0, 00:08:54.640 "state": "online", 00:08:54.640 "raid_level": "raid1", 00:08:54.640 "superblock": true, 00:08:54.640 "num_base_bdevs": 3, 00:08:54.640 "num_base_bdevs_discovered": 2, 00:08:54.640 "num_base_bdevs_operational": 2, 00:08:54.640 "base_bdevs_list": [ 00:08:54.640 { 00:08:54.640 "name": null, 00:08:54.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.640 "is_configured": false, 00:08:54.640 "data_offset": 2048, 00:08:54.640 "data_size": 63488 00:08:54.640 }, 00:08:54.640 { 00:08:54.640 "name": "pt2", 00:08:54.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.640 "is_configured": true, 00:08:54.640 "data_offset": 2048, 00:08:54.640 "data_size": 63488 00:08:54.640 }, 00:08:54.640 { 00:08:54.640 "name": "pt3", 00:08:54.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.640 "is_configured": true, 00:08:54.640 "data_offset": 2048, 00:08:54.640 "data_size": 63488 00:08:54.640 } 00:08:54.640 ] 00:08:54.640 }' 00:08:54.640 01:52:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.640 01:52:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.900 [2024-12-07 01:53:00.199786] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.900 [2024-12-07 01:53:00.199849] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.900 [2024-12-07 01:53:00.199935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.900 [2024-12-07 01:53:00.200005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.900 [2024-12-07 01:53:00.200049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.900 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.901 [2024-12-07 01:53:00.263700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.901 [2024-12-07 01:53:00.263783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.901 [2024-12-07 01:53:00.263814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:54.901 [2024-12-07 01:53:00.263842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.901 [2024-12-07 01:53:00.265918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.901 [2024-12-07 01:53:00.266002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.901 [2024-12-07 01:53:00.266090] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.901 [2024-12-07 01:53:00.266156] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.901 [2024-12-07 01:53:00.266270] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:54.901 [2024-12-07 01:53:00.266324] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.901 [2024-12-07 01:53:00.266359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:08:54.901 [2024-12-07 01:53:00.266440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.901 pt1 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.901 "name": "raid_bdev1", 00:08:54.901 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:54.901 "strip_size_kb": 0, 00:08:54.901 "state": "configuring", 00:08:54.901 "raid_level": "raid1", 00:08:54.901 "superblock": true, 00:08:54.901 "num_base_bdevs": 3, 00:08:54.901 "num_base_bdevs_discovered": 1, 00:08:54.901 "num_base_bdevs_operational": 2, 00:08:54.901 "base_bdevs_list": [ 00:08:54.901 { 00:08:54.901 "name": null, 00:08:54.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.901 "is_configured": false, 00:08:54.901 "data_offset": 2048, 00:08:54.901 "data_size": 63488 00:08:54.901 }, 00:08:54.901 { 00:08:54.901 "name": "pt2", 00:08:54.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.901 "is_configured": true, 00:08:54.901 "data_offset": 2048, 00:08:54.901 "data_size": 63488 00:08:54.901 }, 00:08:54.901 { 00:08:54.901 "name": null, 00:08:54.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:54.901 "is_configured": false, 00:08:54.901 "data_offset": 2048, 00:08:54.901 "data_size": 63488 00:08:54.901 } 00:08:54.901 ] 00:08:54.901 }' 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.901 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 [2024-12-07 01:53:00.762830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.521 [2024-12-07 01:53:00.762925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.521 [2024-12-07 01:53:00.762960] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:55.521 [2024-12-07 01:53:00.762990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.521 [2024-12-07 01:53:00.763411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.521 [2024-12-07 01:53:00.763472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.521 [2024-12-07 01:53:00.763574] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:55.521 [2024-12-07 01:53:00.763626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.521 [2024-12-07 01:53:00.763746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:55.521 [2024-12-07 01:53:00.763787] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.521 [2024-12-07 01:53:00.764030] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:08:55.521 [2024-12-07 01:53:00.764194] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:55.521 [2024-12-07 01:53:00.764234] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:55.521 [2024-12-07 01:53:00.764373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.521 pt3 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:08:55.521 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.521 "name": "raid_bdev1", 00:08:55.522 "uuid": "3b04e7c5-749d-49cf-b99a-250f058692f0", 00:08:55.522 "strip_size_kb": 0, 00:08:55.522 "state": "online", 00:08:55.522 "raid_level": "raid1", 00:08:55.522 "superblock": true, 00:08:55.522 "num_base_bdevs": 3, 00:08:55.522 "num_base_bdevs_discovered": 2, 00:08:55.522 "num_base_bdevs_operational": 2, 00:08:55.522 "base_bdevs_list": [ 00:08:55.522 { 00:08:55.522 "name": null, 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.522 "is_configured": false, 00:08:55.522 "data_offset": 2048, 00:08:55.522 "data_size": 63488 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "pt2", 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.522 "is_configured": true, 00:08:55.522 "data_offset": 2048, 00:08:55.522 "data_size": 63488 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "pt3", 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.522 "is_configured": true, 00:08:55.522 "data_offset": 2048, 00:08:55.522 "data_size": 63488 00:08:55.522 } 00:08:55.522 ] 00:08:55.522 }' 00:08:55.522 01:53:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.522 01:53:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.781 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:55.781 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.781 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.781 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:55.781 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:56.041 [2024-12-07 01:53:01.262243] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3b04e7c5-749d-49cf-b99a-250f058692f0 '!=' 3b04e7c5-749d-49cf-b99a-250f058692f0 ']' 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79392 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 79392 ']' 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 79392 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79392 00:08:56.041 killing process with pid 79392 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79392' 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 79392 00:08:56.041 [2024-12-07 01:53:01.348980] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:56.041 [2024-12-07 01:53:01.349057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.041 [2024-12-07 01:53:01.349117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.041 [2024-12-07 01:53:01.349125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:56.041 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 79392 00:08:56.041 [2024-12-07 01:53:01.381783] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.301 01:53:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:56.301 ************************************ 00:08:56.301 END TEST raid_superblock_test 00:08:56.301 ************************************ 00:08:56.301 00:08:56.301 real 0m6.461s 00:08:56.301 user 0m10.881s 00:08:56.301 sys 0m1.263s 00:08:56.301 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.301 01:53:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.301 01:53:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:56.301 01:53:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.301 01:53:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.301 01:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.301 ************************************ 00:08:56.301 START TEST raid_read_error_test 00:08:56.301 ************************************ 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:08:56.302 01:53:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:56.302 01:53:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Dh4V02Xq5K 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79821 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79821 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 79821 ']' 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.302 01:53:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.562 [2024-12-07 01:53:01.786993] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:56.562 [2024-12-07 01:53:01.787141] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79821 ] 00:08:56.562 [2024-12-07 01:53:01.913739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.562 [2024-12-07 01:53:01.956911] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.562 [2024-12-07 01:53:01.997820] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.562 [2024-12-07 01:53:01.997935] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 BaseBdev1_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 true 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 [2024-12-07 01:53:02.651293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.502 [2024-12-07 01:53:02.651422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.502 [2024-12-07 01:53:02.651462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:57.502 [2024-12-07 01:53:02.651492] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.502 [2024-12-07 01:53:02.653572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.502 [2024-12-07 01:53:02.653655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.502 BaseBdev1 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 BaseBdev2_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 true 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 [2024-12-07 01:53:02.699959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.502 [2024-12-07 01:53:02.700012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.502 [2024-12-07 01:53:02.700032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:57.502 [2024-12-07 01:53:02.700041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.502 [2024-12-07 01:53:02.702060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.502 [2024-12-07 01:53:02.702094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.502 BaseBdev2 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 BaseBdev3_malloc 00:08:57.502 01:53:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 true 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 [2024-12-07 01:53:02.740191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:57.502 [2024-12-07 01:53:02.740300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.502 [2024-12-07 01:53:02.740335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:57.502 [2024-12-07 01:53:02.740345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.502 [2024-12-07 01:53:02.742345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.502 [2024-12-07 01:53:02.742379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:57.502 BaseBdev3 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.502 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.502 [2024-12-07 01:53:02.752258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.502 [2024-12-07 01:53:02.754088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.502 [2024-12-07 01:53:02.754195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.502 [2024-12-07 01:53:02.754383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:57.502 [2024-12-07 01:53:02.754430] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.503 [2024-12-07 01:53:02.754690] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:57.503 [2024-12-07 01:53:02.754869] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:57.503 [2024-12-07 01:53:02.754914] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:57.503 [2024-12-07 01:53:02.755086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.503 01:53:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.503 "name": "raid_bdev1", 00:08:57.503 "uuid": "68722fa7-83f3-49b3-a7e3-0872631ed9d9", 00:08:57.503 "strip_size_kb": 0, 00:08:57.503 "state": "online", 00:08:57.503 "raid_level": "raid1", 00:08:57.503 "superblock": true, 00:08:57.503 "num_base_bdevs": 3, 00:08:57.503 "num_base_bdevs_discovered": 3, 00:08:57.503 "num_base_bdevs_operational": 3, 00:08:57.503 "base_bdevs_list": [ 00:08:57.503 { 00:08:57.503 "name": "BaseBdev1", 00:08:57.503 "uuid": "1112b916-ff43-5ba5-ade9-d0bf0955486d", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "name": "BaseBdev2", 00:08:57.503 "uuid": "da8d4a5a-1b12-5eae-9f25-ad9b28eab8a6", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 
00:08:57.503 }, 00:08:57.503 { 00:08:57.503 "name": "BaseBdev3", 00:08:57.503 "uuid": "cf145a86-e6e1-58b7-8d90-1e06132c9e86", 00:08:57.503 "is_configured": true, 00:08:57.503 "data_offset": 2048, 00:08:57.503 "data_size": 63488 00:08:57.503 } 00:08:57.503 ] 00:08:57.503 }' 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.503 01:53:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.762 01:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:57.762 01:53:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.022 [2024-12-07 01:53:03.259774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.960 
01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.960 "name": "raid_bdev1", 00:08:58.960 "uuid": "68722fa7-83f3-49b3-a7e3-0872631ed9d9", 00:08:58.960 "strip_size_kb": 0, 00:08:58.960 "state": "online", 00:08:58.960 "raid_level": "raid1", 00:08:58.960 "superblock": true, 00:08:58.960 "num_base_bdevs": 3, 00:08:58.960 "num_base_bdevs_discovered": 3, 00:08:58.960 "num_base_bdevs_operational": 3, 00:08:58.960 "base_bdevs_list": [ 00:08:58.960 { 00:08:58.960 "name": "BaseBdev1", 00:08:58.960 "uuid": "1112b916-ff43-5ba5-ade9-d0bf0955486d", 
00:08:58.960 "is_configured": true, 00:08:58.960 "data_offset": 2048, 00:08:58.960 "data_size": 63488 00:08:58.960 }, 00:08:58.960 { 00:08:58.960 "name": "BaseBdev2", 00:08:58.960 "uuid": "da8d4a5a-1b12-5eae-9f25-ad9b28eab8a6", 00:08:58.960 "is_configured": true, 00:08:58.960 "data_offset": 2048, 00:08:58.960 "data_size": 63488 00:08:58.960 }, 00:08:58.960 { 00:08:58.960 "name": "BaseBdev3", 00:08:58.960 "uuid": "cf145a86-e6e1-58b7-8d90-1e06132c9e86", 00:08:58.960 "is_configured": true, 00:08:58.960 "data_offset": 2048, 00:08:58.960 "data_size": 63488 00:08:58.960 } 00:08:58.960 ] 00:08:58.960 }' 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.960 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.220 [2024-12-07 01:53:04.662933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.220 [2024-12-07 01:53:04.663023] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.220 [2024-12-07 01:53:04.665565] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.220 [2024-12-07 01:53:04.665650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.220 [2024-12-07 01:53:04.665780] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.220 [2024-12-07 01:53:04.665861] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:59.220 { 00:08:59.220 "results": [ 00:08:59.220 { 00:08:59.220 "job": "raid_bdev1", 
00:08:59.220 "core_mask": "0x1", 00:08:59.220 "workload": "randrw", 00:08:59.220 "percentage": 50, 00:08:59.220 "status": "finished", 00:08:59.220 "queue_depth": 1, 00:08:59.220 "io_size": 131072, 00:08:59.220 "runtime": 1.404081, 00:08:59.220 "iops": 14779.774101351702, 00:08:59.220 "mibps": 1847.4717626689628, 00:08:59.220 "io_failed": 0, 00:08:59.220 "io_timeout": 0, 00:08:59.220 "avg_latency_us": 65.14879550726735, 00:08:59.220 "min_latency_us": 21.799126637554586, 00:08:59.220 "max_latency_us": 1438.071615720524 00:08:59.220 } 00:08:59.220 ], 00:08:59.220 "core_count": 1 00:08:59.220 } 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79821 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 79821 ']' 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 79821 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.220 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79821 00:08:59.480 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.480 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.480 killing process with pid 79821 00:08:59.480 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79821' 00:08:59.480 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 79821 00:08:59.480 [2024-12-07 01:53:04.709173] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.480 01:53:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 79821 00:08:59.480 [2024-12-07 01:53:04.733978] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Dh4V02Xq5K 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:59.740 ************************************ 00:08:59.740 END TEST raid_read_error_test 00:08:59.740 ************************************ 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:59.740 00:08:59.740 real 0m3.282s 00:08:59.740 user 0m4.152s 00:08:59.740 sys 0m0.510s 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.740 01:53:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.740 01:53:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:59.740 01:53:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:59.740 01:53:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.740 01:53:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.740 ************************************ 00:08:59.740 START TEST raid_write_error_test 00:08:59.740 ************************************ 00:08:59.740 01:53:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:59.740 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.uXxhBV2pCP 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79956 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79956 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 79956 ']' 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:59.741 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.741 [2024-12-07 01:53:05.134577] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:59.741 [2024-12-07 01:53:05.134708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79956 ] 00:09:00.000 [2024-12-07 01:53:05.279256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.000 [2024-12-07 01:53:05.323388] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.000 [2024-12-07 01:53:05.364917] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.000 [2024-12-07 01:53:05.364949] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 BaseBdev1_malloc 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 true 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.568 01:53:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.568 [2024-12-07 01:53:06.002352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.568 [2024-12-07 01:53:06.002451] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.568 [2024-12-07 01:53:06.002498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:00.568 [2024-12-07 01:53:06.002529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.568 [2024-12-07 01:53:06.004724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.568 [2024-12-07 01:53:06.004789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.568 BaseBdev1 00:09:00.568 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.568 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.568 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.568 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.568 01:53:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.827 BaseBdev2_malloc 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.827 true 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.827 [2024-12-07 01:53:06.050534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.827 [2024-12-07 01:53:06.050628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.827 [2024-12-07 01:53:06.050673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:00.827 [2024-12-07 01:53:06.050700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.827 [2024-12-07 01:53:06.052811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.827 [2024-12-07 01:53:06.052875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.827 BaseBdev2 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.827 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.828 01:53:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 BaseBdev3_malloc 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 true 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 [2024-12-07 01:53:06.090791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:00.828 [2024-12-07 01:53:06.090870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.828 [2024-12-07 01:53:06.090905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:00.828 [2024-12-07 01:53:06.090931] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.828 [2024-12-07 01:53:06.092948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.828 [2024-12-07 01:53:06.093011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:00.828 BaseBdev3 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 [2024-12-07 01:53:06.102846] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.828 [2024-12-07 01:53:06.104783] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.828 [2024-12-07 01:53:06.104888] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.828 [2024-12-07 01:53:06.105057] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:00.828 [2024-12-07 01:53:06.105072] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:00.828 [2024-12-07 01:53:06.105308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:09:00.828 [2024-12-07 01:53:06.105440] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:00.828 [2024-12-07 01:53:06.105449] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:09:00.828 [2024-12-07 01:53:06.105568] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.828 "name": "raid_bdev1", 00:09:00.828 "uuid": "ba77c576-5a56-4903-a798-0ac6db96bbe4", 00:09:00.828 "strip_size_kb": 0, 00:09:00.828 "state": "online", 00:09:00.828 "raid_level": "raid1", 00:09:00.828 "superblock": true, 00:09:00.828 "num_base_bdevs": 3, 00:09:00.828 "num_base_bdevs_discovered": 3, 00:09:00.828 "num_base_bdevs_operational": 3, 00:09:00.828 "base_bdevs_list": [ 00:09:00.828 { 00:09:00.828 "name": "BaseBdev1", 00:09:00.828 
"uuid": "8261e79a-1cb9-5b16-8f76-44dda7c17a71", 00:09:00.828 "is_configured": true, 00:09:00.828 "data_offset": 2048, 00:09:00.828 "data_size": 63488 00:09:00.828 }, 00:09:00.828 { 00:09:00.828 "name": "BaseBdev2", 00:09:00.828 "uuid": "bb1de57b-5fdb-5c7f-b91a-8e342b5825e9", 00:09:00.828 "is_configured": true, 00:09:00.828 "data_offset": 2048, 00:09:00.828 "data_size": 63488 00:09:00.828 }, 00:09:00.828 { 00:09:00.828 "name": "BaseBdev3", 00:09:00.828 "uuid": "43d5c137-4fee-5f6c-84ff-3b6e9cef5e44", 00:09:00.828 "is_configured": true, 00:09:00.828 "data_offset": 2048, 00:09:00.828 "data_size": 63488 00:09:00.828 } 00:09:00.828 ] 00:09:00.828 }' 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.828 01:53:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:01.398 01:53:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.398 [2024-12-07 01:53:06.646280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.339 [2024-12-07 01:53:07.564817] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:02.339 [2024-12-07 01:53:07.564943] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:02.339 [2024-12-07 01:53:07.565205] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002600 
00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.339 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable
00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:02.340 "name": "raid_bdev1",
00:09:02.340 "uuid": "ba77c576-5a56-4903-a798-0ac6db96bbe4",
00:09:02.340 "strip_size_kb": 0,
00:09:02.340 "state": "online",
00:09:02.340 "raid_level": "raid1",
00:09:02.340 "superblock": true,
00:09:02.340 "num_base_bdevs": 3,
00:09:02.340 "num_base_bdevs_discovered": 2,
00:09:02.340 "num_base_bdevs_operational": 2,
00:09:02.340 "base_bdevs_list": [
00:09:02.340 {
00:09:02.340 "name": null,
00:09:02.340 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:02.340 "is_configured": false,
00:09:02.340 "data_offset": 0,
00:09:02.340 "data_size": 63488
00:09:02.340 },
00:09:02.340 {
00:09:02.340 "name": "BaseBdev2",
00:09:02.340 "uuid": "bb1de57b-5fdb-5c7f-b91a-8e342b5825e9",
00:09:02.340 "is_configured": true,
00:09:02.340 "data_offset": 2048,
00:09:02.340 "data_size": 63488
00:09:02.340 },
00:09:02.340 {
00:09:02.340 "name": "BaseBdev3",
00:09:02.340 "uuid": "43d5c137-4fee-5f6c-84ff-3b6e9cef5e44",
00:09:02.340 "is_configured": true,
00:09:02.340 "data_offset": 2048,
00:09:02.340 "data_size": 63488
00:09:02.340 }
00:09:02.340 ]
00:09:02.340 }'
00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:02.340 01:53:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.600 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:02.601 [2024-12-07 01:53:08.031060] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:02.601 [2024-12-07 01:53:08.031146] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:02.601 [2024-12-07 01:53:08.033633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:02.601 [2024-12-07 01:53:08.033730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:02.601 {
00:09:02.601 "results": [
00:09:02.601 {
00:09:02.601 "job": "raid_bdev1",
00:09:02.601 "core_mask": "0x1",
00:09:02.601 "workload": "randrw",
00:09:02.601 "percentage": 50,
00:09:02.601 "status": "finished",
00:09:02.601 "queue_depth": 1,
00:09:02.601 "io_size": 131072,
00:09:02.601 "runtime": 1.385639,
00:09:02.601 "iops": 16588.014627186447,
00:09:02.601 "mibps": 2073.501828398306,
00:09:02.601 "io_failed": 0,
00:09:02.601 "io_timeout": 0,
00:09:02.601 "avg_latency_us": 57.784308239757664,
00:09:02.601 "min_latency_us": 21.799126637554586,
00:09:02.601 "max_latency_us": 1395.1441048034935
00:09:02.601 }
00:09:02.601 ],
00:09:02.601 "core_count": 1
00:09:02.601 }
00:09:02.601 [2024-12-07 01:53:08.033833] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:02.601 [2024-12-07 01:53:08.033847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79956
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 79956 ']'
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 79956
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:02.601 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79956
00:09:02.861 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:02.861 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:02.861 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79956' killing process with pid 79956 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 79956
00:09:02.861 [2024-12-07 01:53:08.083414] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:02.861 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 79956
00:09:02.861 [2024-12-07 01:53:08.108979] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.uXxhBV2pCP
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:03.121
00:09:03.121 real 0m3.313s
00:09:03.121 user 0m4.217s
00:09:03.121 sys 0m0.520s
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:03.121 01:53:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.121 ************************************
00:09:03.121 END TEST raid_write_error_test
00:09:03.121 ************************************
00:09:03.121 01:53:08 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:03.121 01:53:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:03.121 01:53:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false
00:09:03.121 01:53:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:03.121 01:53:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:03.121 01:53:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:03.121 ************************************
00:09:03.121 START TEST raid_state_function_test
00:09:03.121 ************************************
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80083
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80083' Process raid pid: 80083 01:53:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80083
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80083 ']'
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:03.121 01:53:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.121 [2024-12-07 01:53:08.514030] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:09:03.121 [2024-12-07 01:53:08.514248] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:03.381 [2024-12-07 01:53:08.639534] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.381 [2024-12-07 01:53:08.682738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.381 [2024-12-07 01:53:08.724071] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:03.381 [2024-12-07 01:53:08.724210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.951 [2024-12-07 01:53:09.352975] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:03.951 [2024-12-07 01:53:09.353071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:03.951 [2024-12-07 01:53:09.353096] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:03.951 [2024-12-07 01:53:09.353106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:03.951 [2024-12-07 01:53:09.353112] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:03.951 [2024-12-07 01:53:09.353125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:03.951 [2024-12-07 01:53:09.353131] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:03.951 [2024-12-07 01:53:09.353140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:03.951 "name": "Existed_Raid",
00:09:03.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.951 "strip_size_kb": 64,
00:09:03.951 "state": "configuring",
00:09:03.951 "raid_level": "raid0",
00:09:03.951 "superblock": false,
00:09:03.951 "num_base_bdevs": 4,
00:09:03.951 "num_base_bdevs_discovered": 0,
00:09:03.951 "num_base_bdevs_operational": 4,
00:09:03.951 "base_bdevs_list": [
00:09:03.951 {
00:09:03.951 "name": "BaseBdev1",
00:09:03.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.951 "is_configured": false,
00:09:03.951 "data_offset": 0,
00:09:03.951 "data_size": 0
00:09:03.951 },
00:09:03.951 {
00:09:03.951 "name": "BaseBdev2",
00:09:03.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.951 "is_configured": false,
00:09:03.951 "data_offset": 0,
00:09:03.951 "data_size": 0
00:09:03.951 },
00:09:03.951 {
00:09:03.951 "name": "BaseBdev3",
00:09:03.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.951 "is_configured": false,
00:09:03.951 "data_offset": 0,
00:09:03.951 "data_size": 0
00:09:03.951 },
00:09:03.951 {
00:09:03.951 "name": "BaseBdev4",
00:09:03.951 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:03.951 "is_configured": false,
00:09:03.951 "data_offset": 0,
00:09:03.951 "data_size": 0
00:09:03.951 }
00:09:03.951 ]
00:09:03.951 }'
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:03.951 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 [2024-12-07 01:53:09.816099] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:04.520 [2024-12-07 01:53:09.816193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 [2024-12-07 01:53:09.828092] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:04.520 [2024-12-07 01:53:09.828163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:04.520 [2024-12-07 01:53:09.828175] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:04.520 [2024-12-07 01:53:09.828184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:04.520 [2024-12-07 01:53:09.828190] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:04.520 [2024-12-07 01:53:09.828198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:04.520 [2024-12-07 01:53:09.828204] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:04.520 [2024-12-07 01:53:09.828212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 [2024-12-07 01:53:09.848589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:04.520 BaseBdev1
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.520 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.520 [
00:09:04.520 {
00:09:04.520 "name": "BaseBdev1",
00:09:04.520 "aliases": [
00:09:04.520 "bae1136d-f9a6-48c4-8f49-e3875d82a377"
00:09:04.520 ],
00:09:04.520 "product_name": "Malloc disk",
00:09:04.520 "block_size": 512,
00:09:04.521 "num_blocks": 65536,
00:09:04.521 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377",
00:09:04.521 "assigned_rate_limits": {
00:09:04.521 "rw_ios_per_sec": 0,
00:09:04.521 "rw_mbytes_per_sec": 0,
00:09:04.521 "r_mbytes_per_sec": 0,
00:09:04.521 "w_mbytes_per_sec": 0
00:09:04.521 },
00:09:04.521 "claimed": true,
00:09:04.521 "claim_type": "exclusive_write",
00:09:04.521 "zoned": false,
00:09:04.521 "supported_io_types": {
00:09:04.521 "read": true,
00:09:04.521 "write": true,
00:09:04.521 "unmap": true,
00:09:04.521 "flush": true,
00:09:04.521 "reset": true,
00:09:04.521 "nvme_admin": false,
00:09:04.521 "nvme_io": false,
00:09:04.521 "nvme_io_md": false,
00:09:04.521 "write_zeroes": true,
00:09:04.521 "zcopy": true,
00:09:04.521 "get_zone_info": false,
00:09:04.521 "zone_management": false,
00:09:04.521 "zone_append": false,
00:09:04.521 "compare": false,
00:09:04.521 "compare_and_write": false,
00:09:04.521 "abort": true,
00:09:04.521 "seek_hole": false,
00:09:04.521 "seek_data": false,
00:09:04.521 "copy": true,
00:09:04.521 "nvme_iov_md": false
00:09:04.521 },
00:09:04.521 "memory_domains": [
00:09:04.521 {
00:09:04.521 "dma_device_id": "system",
00:09:04.521 "dma_device_type": 1
00:09:04.521 },
00:09:04.521 {
00:09:04.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:04.521 "dma_device_type": 2
00:09:04.521 }
00:09:04.521 ],
00:09:04.521 "driver_specific": {}
00:09:04.521 }
00:09:04.521 ]
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:04.521 "name": "Existed_Raid",
00:09:04.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.521 "strip_size_kb": 64,
00:09:04.521 "state": "configuring",
00:09:04.521 "raid_level": "raid0",
00:09:04.521 "superblock": false,
00:09:04.521 "num_base_bdevs": 4,
00:09:04.521 "num_base_bdevs_discovered": 1,
00:09:04.521 "num_base_bdevs_operational": 4,
00:09:04.521 "base_bdevs_list": [
00:09:04.521 {
00:09:04.521 "name": "BaseBdev1",
00:09:04.521 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377",
00:09:04.521 "is_configured": true,
00:09:04.521 "data_offset": 0,
00:09:04.521 "data_size": 65536
00:09:04.521 },
00:09:04.521 {
00:09:04.521 "name": "BaseBdev2",
00:09:04.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.521 "is_configured": false,
00:09:04.521 "data_offset": 0,
00:09:04.521 "data_size": 0
00:09:04.521 },
00:09:04.521 {
00:09:04.521 "name": "BaseBdev3",
00:09:04.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.521 "is_configured": false,
00:09:04.521 "data_offset": 0,
00:09:04.521 "data_size": 0
00:09:04.521 },
00:09:04.521 {
00:09:04.521 "name": "BaseBdev4",
00:09:04.521 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:04.521 "is_configured": false,
00:09:04.521 "data_offset": 0,
00:09:04.521 "data_size": 0
00:09:04.521 }
00:09:04.521 ]
00:09:04.521 }'
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:04.521 01:53:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.089 [2024-12-07 01:53:10.327808] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:05.089 [2024-12-07 01:53:10.327891] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.089 [2024-12-07 01:53:10.339835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:05.089 [2024-12-07 01:53:10.341603] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:05.089 [2024-12-07 01:53:10.341643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:05.089 [2024-12-07 01:53:10.341653] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:05.089 [2024-12-07 01:53:10.341690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:05.089 [2024-12-07 01:53:10.341698] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:09:05.089 [2024-12-07 01:53:10.341706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:05.089 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.090 "name": "Existed_Raid",
00:09:05.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.090 "strip_size_kb": 64,
00:09:05.090 "state": "configuring",
00:09:05.090 "raid_level": "raid0",
00:09:05.090 "superblock": false,
00:09:05.090 "num_base_bdevs": 4,
00:09:05.090 "num_base_bdevs_discovered": 1,
00:09:05.090 "num_base_bdevs_operational": 4,
00:09:05.090 "base_bdevs_list": [
00:09:05.090 {
00:09:05.090 "name": "BaseBdev1",
00:09:05.090 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377",
00:09:05.090 "is_configured": true,
00:09:05.090 "data_offset": 0,
00:09:05.090 "data_size": 65536
00:09:05.090 },
00:09:05.090 {
00:09:05.090 "name": "BaseBdev2",
00:09:05.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.090 "is_configured": false,
00:09:05.090 "data_offset": 0,
00:09:05.090 "data_size": 0
00:09:05.090 },
00:09:05.090 {
00:09:05.090 "name": "BaseBdev3",
00:09:05.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.090 "is_configured": false,
00:09:05.090 "data_offset": 0,
00:09:05.090 "data_size": 0
00:09:05.090 },
00:09:05.090 {
00:09:05.090 "name": "BaseBdev4",
00:09:05.090 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.090 "is_configured": false,
00:09:05.090 "data_offset": 0,
00:09:05.090 "data_size": 0
00:09:05.090 }
00:09:05.090 ]
00:09:05.090 }'
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.090 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.349 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:05.349 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.349 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.608 [2024-12-07 01:53:10.811115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:05.608 BaseBdev2
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.608 [
00:09:05.608 {
00:09:05.608 "name": "BaseBdev2",
00:09:05.608 "aliases": [
00:09:05.608 "a321a40f-46aa-4505-ae3d-22b5e7b8d710"
00:09:05.608 ],
00:09:05.608 "product_name": "Malloc disk",
00:09:05.608 "block_size": 512,
00:09:05.608 "num_blocks": 65536,
00:09:05.608 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710",
00:09:05.608 "assigned_rate_limits": {
00:09:05.608 "rw_ios_per_sec": 0,
00:09:05.608 "rw_mbytes_per_sec": 0,
00:09:05.608 "r_mbytes_per_sec": 0,
00:09:05.608 "w_mbytes_per_sec": 0
00:09:05.608 },
00:09:05.608 "claimed": true,
00:09:05.608 "claim_type": "exclusive_write",
00:09:05.608 "zoned": false,
00:09:05.608 "supported_io_types": {
00:09:05.608 "read": true,
00:09:05.608 "write": true,
00:09:05.608 "unmap": true,
00:09:05.608 "flush": true,
00:09:05.608 "reset": true,
00:09:05.608 "nvme_admin": false,
00:09:05.608 "nvme_io": false,
00:09:05.608 "nvme_io_md": false,
00:09:05.608 "write_zeroes": true,
00:09:05.608 "zcopy": true,
00:09:05.608 "get_zone_info": false,
00:09:05.608 "zone_management": false,
00:09:05.608 "zone_append": false,
00:09:05.608 "compare": false,
00:09:05.608 "compare_and_write": false,
00:09:05.608 "abort": true,
00:09:05.608 "seek_hole": false,
00:09:05.608 "seek_data": false,
00:09:05.608 "copy": true,
00:09:05.608 "nvme_iov_md": false
00:09:05.608 },
00:09:05.608 "memory_domains": [
00:09:05.608 {
00:09:05.608 "dma_device_id": "system",
00:09:05.608 "dma_device_type": 1
00:09:05.608 },
00:09:05.608 {
00:09:05.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:05.608 "dma_device_type": 2
00:09:05.608 }
00:09:05.608 ],
00:09:05.608 "driver_specific": {}
00:09:05.608 }
00:09:05.608 ]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.608 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:05.608 "name": "Existed_Raid",
00:09:05.608 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.608 "strip_size_kb": 64,
00:09:05.608 "state": "configuring",
00:09:05.608 "raid_level": "raid0",
00:09:05.608 "superblock": false,
00:09:05.608 "num_base_bdevs": 4,
00:09:05.608 "num_base_bdevs_discovered": 2,
00:09:05.608 "num_base_bdevs_operational": 4,
00:09:05.608 "base_bdevs_list": [
00:09:05.608 {
00:09:05.608 "name": "BaseBdev1",
00:09:05.608 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377",
00:09:05.608 "is_configured": true,
00:09:05.608 "data_offset": 0,
00:09:05.608 "data_size": 65536
00:09:05.608 },
00:09:05.608 {
00:09:05.608 "name": "BaseBdev2",
00:09:05.608 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710",
00:09:05.608 "is_configured": true,
00:09:05.608 "data_offset": 0,
00:09:05.608 "data_size": 65536
00:09:05.608 },
00:09:05.608 {
00:09:05.608 "name": "BaseBdev3",
00:09:05.608 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.608 "is_configured": false,
00:09:05.608 "data_offset": 0,
00:09:05.609 "data_size": 0
00:09:05.609 },
00:09:05.609 {
00:09:05.609 "name": "BaseBdev4",
00:09:05.609 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:05.609 "is_configured": false,
00:09:05.609 "data_offset": 0,
00:09:05.609 "data_size": 0
00:09:05.609 }
00:09:05.609 ]
00:09:05.609 }'
00:09:05.609 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:05.609 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:05.868 [2024-12-07 01:53:11.297134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:05.868 BaseBdev3
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:05.868 01:53:11 bdev_raid.raid_state_function_test --
"is_configured": true, 00:09:05.608 "data_offset": 0, 00:09:05.608 "data_size": 65536 00:09:05.608 }, 00:09:05.608 { 00:09:05.608 "name": "BaseBdev3", 00:09:05.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.608 "is_configured": false, 00:09:05.608 "data_offset": 0, 00:09:05.609 "data_size": 0 00:09:05.609 }, 00:09:05.609 { 00:09:05.609 "name": "BaseBdev4", 00:09:05.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.609 "is_configured": false, 00:09:05.609 "data_offset": 0, 00:09:05.609 "data_size": 0 00:09:05.609 } 00:09:05.609 ] 00:09:05.609 }' 00:09:05.609 01:53:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.609 01:53:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 [2024-12-07 01:53:11.297134] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.868 BaseBdev3 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.868 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 [ 00:09:05.868 { 00:09:05.868 "name": "BaseBdev3", 00:09:05.868 "aliases": [ 00:09:05.868 "89258d6e-71d0-4425-b130-97eef64a4774" 00:09:05.868 ], 00:09:05.868 "product_name": "Malloc disk", 00:09:05.868 "block_size": 512, 00:09:05.868 "num_blocks": 65536, 00:09:05.868 "uuid": "89258d6e-71d0-4425-b130-97eef64a4774", 00:09:05.868 "assigned_rate_limits": { 00:09:05.868 "rw_ios_per_sec": 0, 00:09:05.868 "rw_mbytes_per_sec": 0, 00:09:05.868 "r_mbytes_per_sec": 0, 00:09:05.868 "w_mbytes_per_sec": 0 00:09:05.868 }, 00:09:05.868 "claimed": true, 00:09:05.868 "claim_type": "exclusive_write", 00:09:05.868 "zoned": false, 00:09:05.868 "supported_io_types": { 00:09:05.868 "read": true, 00:09:05.868 "write": true, 00:09:05.868 "unmap": true, 00:09:06.127 "flush": true, 00:09:06.127 "reset": true, 00:09:06.127 "nvme_admin": false, 00:09:06.127 "nvme_io": false, 00:09:06.127 "nvme_io_md": false, 00:09:06.127 "write_zeroes": true, 00:09:06.127 "zcopy": true, 00:09:06.127 "get_zone_info": false, 00:09:06.127 "zone_management": false, 00:09:06.127 "zone_append": false, 00:09:06.127 "compare": false, 00:09:06.127 "compare_and_write": false, 
00:09:06.127 "abort": true, 00:09:06.127 "seek_hole": false, 00:09:06.127 "seek_data": false, 00:09:06.127 "copy": true, 00:09:06.127 "nvme_iov_md": false 00:09:06.127 }, 00:09:06.127 "memory_domains": [ 00:09:06.127 { 00:09:06.127 "dma_device_id": "system", 00:09:06.127 "dma_device_type": 1 00:09:06.127 }, 00:09:06.127 { 00:09:06.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.127 "dma_device_type": 2 00:09:06.127 } 00:09:06.127 ], 00:09:06.127 "driver_specific": {} 00:09:06.127 } 00:09:06.127 ] 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.127 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.128 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.128 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.128 "name": "Existed_Raid", 00:09:06.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.128 "strip_size_kb": 64, 00:09:06.128 "state": "configuring", 00:09:06.128 "raid_level": "raid0", 00:09:06.128 "superblock": false, 00:09:06.128 "num_base_bdevs": 4, 00:09:06.128 "num_base_bdevs_discovered": 3, 00:09:06.128 "num_base_bdevs_operational": 4, 00:09:06.128 "base_bdevs_list": [ 00:09:06.128 { 00:09:06.128 "name": "BaseBdev1", 00:09:06.128 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377", 00:09:06.128 "is_configured": true, 00:09:06.128 "data_offset": 0, 00:09:06.128 "data_size": 65536 00:09:06.128 }, 00:09:06.128 { 00:09:06.128 "name": "BaseBdev2", 00:09:06.128 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710", 00:09:06.128 "is_configured": true, 00:09:06.128 "data_offset": 0, 00:09:06.128 "data_size": 65536 00:09:06.128 }, 00:09:06.128 { 00:09:06.128 "name": "BaseBdev3", 00:09:06.128 "uuid": "89258d6e-71d0-4425-b130-97eef64a4774", 00:09:06.128 "is_configured": true, 00:09:06.128 "data_offset": 0, 00:09:06.128 "data_size": 65536 00:09:06.128 }, 00:09:06.128 { 00:09:06.128 "name": "BaseBdev4", 00:09:06.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.128 "is_configured": false, 
00:09:06.128 "data_offset": 0, 00:09:06.128 "data_size": 0 00:09:06.128 } 00:09:06.128 ] 00:09:06.128 }' 00:09:06.128 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.128 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.387 [2024-12-07 01:53:11.743206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:06.387 [2024-12-07 01:53:11.743310] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:06.387 [2024-12-07 01:53:11.743336] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:06.387 [2024-12-07 01:53:11.743641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:06.387 [2024-12-07 01:53:11.743862] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:06.387 [2024-12-07 01:53:11.743916] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:06.387 [2024-12-07 01:53:11.744165] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.387 BaseBdev4 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.387 [ 00:09:06.387 { 00:09:06.387 "name": "BaseBdev4", 00:09:06.387 "aliases": [ 00:09:06.387 "5c4586bd-a196-44b8-8433-e1f2dd2a45c4" 00:09:06.387 ], 00:09:06.387 "product_name": "Malloc disk", 00:09:06.387 "block_size": 512, 00:09:06.387 "num_blocks": 65536, 00:09:06.387 "uuid": "5c4586bd-a196-44b8-8433-e1f2dd2a45c4", 00:09:06.387 "assigned_rate_limits": { 00:09:06.387 "rw_ios_per_sec": 0, 00:09:06.387 "rw_mbytes_per_sec": 0, 00:09:06.387 "r_mbytes_per_sec": 0, 00:09:06.387 "w_mbytes_per_sec": 0 00:09:06.387 }, 00:09:06.387 "claimed": true, 00:09:06.387 "claim_type": "exclusive_write", 00:09:06.387 "zoned": false, 00:09:06.387 "supported_io_types": { 00:09:06.387 "read": true, 00:09:06.387 "write": true, 00:09:06.387 "unmap": true, 00:09:06.387 "flush": true, 00:09:06.387 "reset": true, 00:09:06.387 
"nvme_admin": false, 00:09:06.387 "nvme_io": false, 00:09:06.387 "nvme_io_md": false, 00:09:06.387 "write_zeroes": true, 00:09:06.387 "zcopy": true, 00:09:06.387 "get_zone_info": false, 00:09:06.387 "zone_management": false, 00:09:06.387 "zone_append": false, 00:09:06.387 "compare": false, 00:09:06.387 "compare_and_write": false, 00:09:06.387 "abort": true, 00:09:06.387 "seek_hole": false, 00:09:06.387 "seek_data": false, 00:09:06.387 "copy": true, 00:09:06.387 "nvme_iov_md": false 00:09:06.387 }, 00:09:06.387 "memory_domains": [ 00:09:06.387 { 00:09:06.387 "dma_device_id": "system", 00:09:06.387 "dma_device_type": 1 00:09:06.387 }, 00:09:06.387 { 00:09:06.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.387 "dma_device_type": 2 00:09:06.387 } 00:09:06.387 ], 00:09:06.387 "driver_specific": {} 00:09:06.387 } 00:09:06.387 ] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:06.387 01:53:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.387 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.387 "name": "Existed_Raid", 00:09:06.387 "uuid": "a10b5057-be0d-4ce0-a545-e31905bf9e34", 00:09:06.387 "strip_size_kb": 64, 00:09:06.387 "state": "online", 00:09:06.387 "raid_level": "raid0", 00:09:06.387 "superblock": false, 00:09:06.387 "num_base_bdevs": 4, 00:09:06.387 "num_base_bdevs_discovered": 4, 00:09:06.387 "num_base_bdevs_operational": 4, 00:09:06.387 "base_bdevs_list": [ 00:09:06.387 { 00:09:06.387 "name": "BaseBdev1", 00:09:06.387 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377", 00:09:06.387 "is_configured": true, 00:09:06.387 "data_offset": 0, 00:09:06.387 "data_size": 65536 00:09:06.387 }, 00:09:06.387 { 00:09:06.387 "name": "BaseBdev2", 00:09:06.387 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710", 00:09:06.387 "is_configured": true, 00:09:06.387 "data_offset": 0, 00:09:06.387 "data_size": 65536 00:09:06.387 }, 00:09:06.387 { 00:09:06.387 "name": "BaseBdev3", 00:09:06.387 "uuid": 
"89258d6e-71d0-4425-b130-97eef64a4774", 00:09:06.387 "is_configured": true, 00:09:06.387 "data_offset": 0, 00:09:06.387 "data_size": 65536 00:09:06.387 }, 00:09:06.387 { 00:09:06.388 "name": "BaseBdev4", 00:09:06.388 "uuid": "5c4586bd-a196-44b8-8433-e1f2dd2a45c4", 00:09:06.388 "is_configured": true, 00:09:06.388 "data_offset": 0, 00:09:06.388 "data_size": 65536 00:09:06.388 } 00:09:06.388 ] 00:09:06.388 }' 00:09:06.388 01:53:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.388 01:53:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.954 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.955 [2024-12-07 01:53:12.234795] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.955 01:53:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:06.955 "name": "Existed_Raid", 00:09:06.955 "aliases": [ 00:09:06.955 "a10b5057-be0d-4ce0-a545-e31905bf9e34" 00:09:06.955 ], 00:09:06.955 "product_name": "Raid Volume", 00:09:06.955 "block_size": 512, 00:09:06.955 "num_blocks": 262144, 00:09:06.955 "uuid": "a10b5057-be0d-4ce0-a545-e31905bf9e34", 00:09:06.955 "assigned_rate_limits": { 00:09:06.955 "rw_ios_per_sec": 0, 00:09:06.955 "rw_mbytes_per_sec": 0, 00:09:06.955 "r_mbytes_per_sec": 0, 00:09:06.955 "w_mbytes_per_sec": 0 00:09:06.955 }, 00:09:06.955 "claimed": false, 00:09:06.955 "zoned": false, 00:09:06.955 "supported_io_types": { 00:09:06.955 "read": true, 00:09:06.955 "write": true, 00:09:06.955 "unmap": true, 00:09:06.955 "flush": true, 00:09:06.955 "reset": true, 00:09:06.955 "nvme_admin": false, 00:09:06.955 "nvme_io": false, 00:09:06.955 "nvme_io_md": false, 00:09:06.955 "write_zeroes": true, 00:09:06.955 "zcopy": false, 00:09:06.955 "get_zone_info": false, 00:09:06.955 "zone_management": false, 00:09:06.955 "zone_append": false, 00:09:06.955 "compare": false, 00:09:06.955 "compare_and_write": false, 00:09:06.955 "abort": false, 00:09:06.955 "seek_hole": false, 00:09:06.955 "seek_data": false, 00:09:06.955 "copy": false, 00:09:06.955 "nvme_iov_md": false 00:09:06.955 }, 00:09:06.955 "memory_domains": [ 00:09:06.955 { 00:09:06.955 "dma_device_id": "system", 00:09:06.955 "dma_device_type": 1 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.955 "dma_device_type": 2 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "system", 00:09:06.955 "dma_device_type": 1 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.955 "dma_device_type": 2 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "system", 00:09:06.955 "dma_device_type": 1 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:06.955 "dma_device_type": 2 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "system", 00:09:06.955 "dma_device_type": 1 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.955 "dma_device_type": 2 00:09:06.955 } 00:09:06.955 ], 00:09:06.955 "driver_specific": { 00:09:06.955 "raid": { 00:09:06.955 "uuid": "a10b5057-be0d-4ce0-a545-e31905bf9e34", 00:09:06.955 "strip_size_kb": 64, 00:09:06.955 "state": "online", 00:09:06.955 "raid_level": "raid0", 00:09:06.955 "superblock": false, 00:09:06.955 "num_base_bdevs": 4, 00:09:06.955 "num_base_bdevs_discovered": 4, 00:09:06.955 "num_base_bdevs_operational": 4, 00:09:06.955 "base_bdevs_list": [ 00:09:06.955 { 00:09:06.955 "name": "BaseBdev1", 00:09:06.955 "uuid": "bae1136d-f9a6-48c4-8f49-e3875d82a377", 00:09:06.955 "is_configured": true, 00:09:06.955 "data_offset": 0, 00:09:06.955 "data_size": 65536 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "name": "BaseBdev2", 00:09:06.955 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710", 00:09:06.955 "is_configured": true, 00:09:06.955 "data_offset": 0, 00:09:06.955 "data_size": 65536 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "name": "BaseBdev3", 00:09:06.955 "uuid": "89258d6e-71d0-4425-b130-97eef64a4774", 00:09:06.955 "is_configured": true, 00:09:06.955 "data_offset": 0, 00:09:06.955 "data_size": 65536 00:09:06.955 }, 00:09:06.955 { 00:09:06.955 "name": "BaseBdev4", 00:09:06.955 "uuid": "5c4586bd-a196-44b8-8433-e1f2dd2a45c4", 00:09:06.955 "is_configured": true, 00:09:06.955 "data_offset": 0, 00:09:06.955 "data_size": 65536 00:09:06.955 } 00:09:06.955 ] 00:09:06.955 } 00:09:06.955 } 00:09:06.955 }' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:06.955 BaseBdev2 00:09:06.955 BaseBdev3 
00:09:06.955 BaseBdev4' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.955 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.214 01:53:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.214 [2024-12-07 01:53:12.557927] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:07.214 [2024-12-07 01:53:12.557996] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.214 [2024-12-07 01:53:12.558055] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.214 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.214 "name": "Existed_Raid", 00:09:07.214 "uuid": "a10b5057-be0d-4ce0-a545-e31905bf9e34", 00:09:07.214 "strip_size_kb": 64, 00:09:07.214 "state": "offline", 00:09:07.214 "raid_level": "raid0", 00:09:07.214 "superblock": false, 00:09:07.214 "num_base_bdevs": 4, 00:09:07.214 "num_base_bdevs_discovered": 3, 00:09:07.214 "num_base_bdevs_operational": 3, 00:09:07.214 "base_bdevs_list": [ 00:09:07.214 { 00:09:07.215 "name": null, 00:09:07.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.215 "is_configured": false, 00:09:07.215 "data_offset": 0, 00:09:07.215 "data_size": 65536 00:09:07.215 }, 00:09:07.215 { 00:09:07.215 "name": "BaseBdev2", 00:09:07.215 "uuid": "a321a40f-46aa-4505-ae3d-22b5e7b8d710", 00:09:07.215 "is_configured": 
true, 00:09:07.215 "data_offset": 0, 00:09:07.215 "data_size": 65536 00:09:07.215 }, 00:09:07.215 { 00:09:07.215 "name": "BaseBdev3", 00:09:07.215 "uuid": "89258d6e-71d0-4425-b130-97eef64a4774", 00:09:07.215 "is_configured": true, 00:09:07.215 "data_offset": 0, 00:09:07.215 "data_size": 65536 00:09:07.215 }, 00:09:07.215 { 00:09:07.215 "name": "BaseBdev4", 00:09:07.215 "uuid": "5c4586bd-a196-44b8-8433-e1f2dd2a45c4", 00:09:07.215 "is_configured": true, 00:09:07.215 "data_offset": 0, 00:09:07.215 "data_size": 65536 00:09:07.215 } 00:09:07.215 ] 00:09:07.215 }' 00:09:07.215 01:53:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.215 01:53:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.783 [2024-12-07 01:53:13.060419] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.783 [2024-12-07 01:53:13.111496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.783 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.784 [2024-12-07 01:53:13.178080] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
[2024-12-07 01:53:13.178164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.784 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.043 BaseBdev2
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.043 [
00:09:08.043 {
00:09:08.043 "name": "BaseBdev2",
00:09:08.043 "aliases": [
00:09:08.043 "8a20fd45-b89e-46f0-85da-ce08c3f72daa"
00:09:08.043 ],
00:09:08.043 "product_name": "Malloc disk",
00:09:08.043 "block_size": 512,
00:09:08.043 "num_blocks": 65536,
00:09:08.043 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa",
00:09:08.043 "assigned_rate_limits": {
00:09:08.043 "rw_ios_per_sec": 0,
00:09:08.043 "rw_mbytes_per_sec": 0,
00:09:08.043 "r_mbytes_per_sec": 0,
00:09:08.043 "w_mbytes_per_sec": 0
00:09:08.043 },
00:09:08.043 "claimed": false,
00:09:08.043 "zoned": false,
00:09:08.043 "supported_io_types": {
00:09:08.043 "read": true,
00:09:08.043 "write": true,
00:09:08.043 "unmap": true,
00:09:08.043 "flush": true,
00:09:08.043 "reset": true,
00:09:08.043 "nvme_admin": false,
00:09:08.043 "nvme_io": false,
00:09:08.043 "nvme_io_md": false,
00:09:08.043 "write_zeroes": true,
00:09:08.043 "zcopy": true,
00:09:08.043 "get_zone_info": false,
00:09:08.043 "zone_management": false,
00:09:08.043 "zone_append": false,
00:09:08.043 "compare": false,
00:09:08.043 "compare_and_write": false,
00:09:08.043 "abort": true,
00:09:08.043 "seek_hole": false,
00:09:08.043 "seek_data": false,
00:09:08.043 "copy": true,
00:09:08.043 "nvme_iov_md": false
00:09:08.043 },
00:09:08.043 "memory_domains": [
00:09:08.043 {
00:09:08.043 "dma_device_id": "system",
00:09:08.043 "dma_device_type": 1
00:09:08.043 },
00:09:08.043 {
00:09:08.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.043 "dma_device_type": 2
00:09:08.043 }
00:09:08.043 ],
00:09:08.043 "driver_specific": {}
00:09:08.043 }
00:09:08.043 ]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.043 BaseBdev3
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.043 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 [
00:09:08.044 {
00:09:08.044 "name": "BaseBdev3",
00:09:08.044 "aliases": [
00:09:08.044 "b02644ab-0461-46f0-a2b8-5342af06e94a"
00:09:08.044 ],
00:09:08.044 "product_name": "Malloc disk",
00:09:08.044 "block_size": 512,
00:09:08.044 "num_blocks": 65536,
00:09:08.044 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a",
00:09:08.044 "assigned_rate_limits": {
00:09:08.044 "rw_ios_per_sec": 0,
00:09:08.044 "rw_mbytes_per_sec": 0,
00:09:08.044 "r_mbytes_per_sec": 0,
00:09:08.044 "w_mbytes_per_sec": 0
00:09:08.044 },
00:09:08.044 "claimed": false,
00:09:08.044 "zoned": false,
00:09:08.044 "supported_io_types": {
00:09:08.044 "read": true,
00:09:08.044 "write": true,
00:09:08.044 "unmap": true,
00:09:08.044 "flush": true,
00:09:08.044 "reset": true,
00:09:08.044 "nvme_admin": false,
00:09:08.044 "nvme_io": false,
00:09:08.044 "nvme_io_md": false,
00:09:08.044 "write_zeroes": true,
00:09:08.044 "zcopy": true,
00:09:08.044 "get_zone_info": false,
00:09:08.044 "zone_management": false,
00:09:08.044 "zone_append": false,
00:09:08.044 "compare": false,
00:09:08.044 "compare_and_write": false,
00:09:08.044 "abort": true,
00:09:08.044 "seek_hole": false,
00:09:08.044 "seek_data": false,
00:09:08.044 "copy": true,
00:09:08.044 "nvme_iov_md": false
00:09:08.044 },
00:09:08.044 "memory_domains": [
00:09:08.044 {
00:09:08.044 "dma_device_id": "system",
00:09:08.044 "dma_device_type": 1
00:09:08.044 },
00:09:08.044 {
00:09:08.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.044 "dma_device_type": 2
00:09:08.044 }
00:09:08.044 ],
00:09:08.044 "driver_specific": {}
00:09:08.044 }
00:09:08.044 ]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 BaseBdev4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 [
00:09:08.044 {
00:09:08.044 "name": "BaseBdev4",
00:09:08.044 "aliases": [
00:09:08.044 "6e81ea4a-bb59-4303-b6bd-a3762222e3ee"
00:09:08.044 ],
00:09:08.044 "product_name": "Malloc disk",
00:09:08.044 "block_size": 512,
00:09:08.044 "num_blocks": 65536,
00:09:08.044 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee",
00:09:08.044 "assigned_rate_limits": {
00:09:08.044 "rw_ios_per_sec": 0,
00:09:08.044 "rw_mbytes_per_sec": 0,
00:09:08.044 "r_mbytes_per_sec": 0,
00:09:08.044 "w_mbytes_per_sec": 0
00:09:08.044 },
00:09:08.044 "claimed": false,
00:09:08.044 "zoned": false,
00:09:08.044 "supported_io_types": {
00:09:08.044 "read": true,
00:09:08.044 "write": true,
00:09:08.044 "unmap": true,
00:09:08.044 "flush": true,
00:09:08.044 "reset": true,
00:09:08.044 "nvme_admin": false,
00:09:08.044 "nvme_io": false,
00:09:08.044 "nvme_io_md": false,
00:09:08.044 "write_zeroes": true,
00:09:08.044 "zcopy": true,
00:09:08.044 "get_zone_info": false,
00:09:08.044 "zone_management": false,
00:09:08.044 "zone_append": false,
00:09:08.044 "compare": false,
00:09:08.044 "compare_and_write": false,
00:09:08.044 "abort": true,
00:09:08.044 "seek_hole": false,
00:09:08.044 "seek_data": false,
00:09:08.044 "copy": true,
00:09:08.044 "nvme_iov_md": false
00:09:08.044 },
00:09:08.044 "memory_domains": [
00:09:08.044 {
00:09:08.044 "dma_device_id": "system",
00:09:08.044 "dma_device_type": 1
00:09:08.044 },
00:09:08.044 {
00:09:08.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.044 "dma_device_type": 2
00:09:08.044 }
00:09:08.044 ],
00:09:08.044 "driver_specific": {}
00:09:08.044 }
00:09:08.044 ]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 [2024-12-07 01:53:13.401302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-07 01:53:13.401385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-07 01:53:13.401442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
[2024-12-07 01:53:13.403240] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-12-07 01:53:13.403325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.044 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.044 "name": "Existed_Raid",
00:09:08.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.044 "strip_size_kb": 64,
00:09:08.044 "state": "configuring",
00:09:08.044 "raid_level": "raid0",
00:09:08.044 "superblock": false,
00:09:08.044 "num_base_bdevs": 4,
00:09:08.044 "num_base_bdevs_discovered": 3,
00:09:08.044 "num_base_bdevs_operational": 4,
00:09:08.044 "base_bdevs_list": [
00:09:08.044 {
00:09:08.044 "name": "BaseBdev1",
00:09:08.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.044 "is_configured": false,
00:09:08.044 "data_offset": 0,
00:09:08.044 "data_size": 0
00:09:08.044 },
00:09:08.044 {
00:09:08.044 "name": "BaseBdev2",
00:09:08.044 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa",
00:09:08.044 "is_configured": true,
00:09:08.044 "data_offset": 0,
00:09:08.044 "data_size": 65536
00:09:08.044 },
00:09:08.044 {
00:09:08.044 "name": "BaseBdev3",
00:09:08.044 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a",
00:09:08.044 "is_configured": true,
00:09:08.044 "data_offset": 0,
00:09:08.045 "data_size": 65536
00:09:08.045 },
00:09:08.045 {
00:09:08.045 "name": "BaseBdev4",
00:09:08.045 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee",
00:09:08.045 "is_configured": true,
00:09:08.045 "data_offset": 0,
00:09:08.045 "data_size": 65536
00:09:08.045 }
00:09:08.045 ]
00:09:08.045 }'
00:09:08.045 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.045 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.625 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:08.625 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.625 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.625 [2024-12-07 01:53:13.820568] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:08.625 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.625 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.626 "name": "Existed_Raid",
00:09:08.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.626 "strip_size_kb": 64,
00:09:08.626 "state": "configuring",
00:09:08.626 "raid_level": "raid0",
00:09:08.626 "superblock": false,
00:09:08.626 "num_base_bdevs": 4,
00:09:08.626 "num_base_bdevs_discovered": 2,
00:09:08.626 "num_base_bdevs_operational": 4,
00:09:08.626 "base_bdevs_list": [
00:09:08.626 {
00:09:08.626 "name": "BaseBdev1",
00:09:08.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.626 "is_configured": false,
00:09:08.626 "data_offset": 0,
00:09:08.626 "data_size": 0
00:09:08.626 },
00:09:08.626 {
00:09:08.626 "name": null,
00:09:08.626 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa",
00:09:08.626 "is_configured": false,
00:09:08.626 "data_offset": 0,
00:09:08.626 "data_size": 65536
00:09:08.626 },
00:09:08.626 {
00:09:08.626 "name": "BaseBdev3",
00:09:08.626 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a",
00:09:08.626 "is_configured": true,
00:09:08.626 "data_offset": 0,
00:09:08.626 "data_size": 65536
00:09:08.626 },
00:09:08.626 {
00:09:08.626 "name": "BaseBdev4",
00:09:08.626 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee",
00:09:08.626 "is_configured": true,
00:09:08.626 "data_offset": 0,
00:09:08.626 "data_size": 65536
00:09:08.626 }
00:09:08.626 ]
00:09:08.626 }'
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.626 01:53:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.903 [2024-12-07 01:53:14.302416] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:08.903 BaseBdev1
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.903 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.903 [
00:09:08.903 {
00:09:08.903 "name": "BaseBdev1",
00:09:08.903 "aliases": [
00:09:08.903 "df5ce68e-647a-4c4a-9555-1a132f697e8c"
00:09:08.903 ],
00:09:08.903 "product_name": "Malloc disk",
00:09:08.903 "block_size": 512,
00:09:08.903 "num_blocks": 65536,
00:09:08.903 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c",
00:09:08.903 "assigned_rate_limits": {
00:09:08.903 "rw_ios_per_sec": 0,
00:09:08.903 "rw_mbytes_per_sec": 0,
00:09:08.903 "r_mbytes_per_sec": 0,
00:09:08.903 "w_mbytes_per_sec": 0
00:09:08.903 },
00:09:08.903 "claimed": true,
00:09:08.903 "claim_type": "exclusive_write",
00:09:08.903 "zoned": false,
00:09:08.903 "supported_io_types": {
00:09:08.903 "read": true,
00:09:08.903 "write": true,
00:09:08.903 "unmap": true,
00:09:08.903 "flush": true,
00:09:08.903 "reset": true,
00:09:08.903 "nvme_admin": false,
00:09:08.903 "nvme_io": false,
00:09:08.903 "nvme_io_md": false,
00:09:08.903 "write_zeroes": true,
00:09:08.903 "zcopy": true,
00:09:08.903 "get_zone_info": false,
00:09:08.903 "zone_management": false,
00:09:08.903 "zone_append": false,
00:09:08.903 "compare": false,
00:09:08.903 "compare_and_write": false,
00:09:08.903 "abort": true,
00:09:08.903 "seek_hole": false,
00:09:08.903 "seek_data": false,
00:09:08.903 "copy": true,
00:09:08.903 "nvme_iov_md": false
00:09:08.903 },
00:09:08.904 "memory_domains": [
00:09:08.904 {
00:09:08.904 "dma_device_id": "system",
00:09:08.904 "dma_device_type": 1
00:09:08.904 },
00:09:08.904 {
00:09:08.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.904 "dma_device_type": 2
00:09:08.904 }
00:09:08.904 ],
00:09:08.904 "driver_specific": {}
00:09:08.904 }
00:09:08.904 ]
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.904 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.163 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.163 "name": "Existed_Raid",
00:09:09.163 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.163 "strip_size_kb": 64,
00:09:09.163 "state": "configuring",
00:09:09.163 "raid_level": "raid0",
00:09:09.163 "superblock": false,
00:09:09.163 "num_base_bdevs": 4,
00:09:09.163 "num_base_bdevs_discovered": 3,
00:09:09.163 "num_base_bdevs_operational": 4,
00:09:09.163 "base_bdevs_list": [
00:09:09.163 {
00:09:09.163 "name": "BaseBdev1",
00:09:09.163 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c",
00:09:09.163 "is_configured": true,
00:09:09.163 "data_offset": 0,
00:09:09.163 "data_size": 65536
00:09:09.163 },
00:09:09.163 {
00:09:09.163 "name": null,
00:09:09.163 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa",
00:09:09.163 "is_configured": false,
00:09:09.163 "data_offset": 0,
00:09:09.163 "data_size": 65536
00:09:09.163 },
00:09:09.163 {
00:09:09.163 "name": "BaseBdev3",
00:09:09.163 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a",
00:09:09.163 "is_configured": true,
00:09:09.163 "data_offset": 0,
00:09:09.163 "data_size": 65536
00:09:09.163 },
00:09:09.163 {
00:09:09.163 "name": "BaseBdev4",
00:09:09.163 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee",
00:09:09.163 "is_configured": true,
00:09:09.163 "data_offset": 0,
00:09:09.163 "data_size": 65536
00:09:09.163 }
00:09:09.163 ]
00:09:09.163 }'
00:09:09.163 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.163 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.423 [2024-12-07 01:53:14.777624] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:09.423 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:09.423 "name": "Existed_Raid",
00:09:09.423 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:09.423 "strip_size_kb": 64,
00:09:09.423 "state": "configuring",
00:09:09.423 "raid_level": "raid0",
00:09:09.423 "superblock": false,
00:09:09.423 "num_base_bdevs": 4,
00:09:09.423 "num_base_bdevs_discovered": 2,
00:09:09.423 "num_base_bdevs_operational": 4,
00:09:09.423 "base_bdevs_list": [
00:09:09.423 {
00:09:09.423 "name": "BaseBdev1",
00:09:09.423 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c",
00:09:09.423 "is_configured": true,
00:09:09.423 "data_offset": 0,
00:09:09.423 "data_size": 65536
00:09:09.423 },
00:09:09.423 {
00:09:09.423 "name": null,
00:09:09.423 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa",
00:09:09.423 "is_configured": false,
00:09:09.423 "data_offset": 0,
00:09:09.423 "data_size": 65536
00:09:09.423 },
00:09:09.423 {
00:09:09.423 "name": null,
00:09:09.423 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a",
00:09:09.423 "is_configured": false,
00:09:09.423 "data_offset": 0,
00:09:09.423 "data_size": 65536
00:09:09.423 },
00:09:09.423 {
00:09:09.423 "name": "BaseBdev4",
00:09:09.423 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee",
00:09:09.423 "is_configured": true,
00:09:09.423 "data_offset": 0,
00:09:09.423 "data_size": 65536
00:09:09.423 }
00:09:09.423 ]
00:09:09.423 }'
00:09:09.424 01:53:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:09.424 01:53:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.991 01:53:15 bdev_raid.raid_state_function_test --
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.991 [2024-12-07 01:53:15.244879] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.991 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.992 "name": "Existed_Raid", 00:09:09.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.992 "strip_size_kb": 64, 00:09:09.992 "state": "configuring", 00:09:09.992 "raid_level": "raid0", 00:09:09.992 "superblock": false, 00:09:09.992 "num_base_bdevs": 4, 00:09:09.992 "num_base_bdevs_discovered": 3, 00:09:09.992 "num_base_bdevs_operational": 4, 00:09:09.992 "base_bdevs_list": [ 00:09:09.992 { 00:09:09.992 "name": "BaseBdev1", 00:09:09.992 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:09.992 "is_configured": true, 00:09:09.992 "data_offset": 0, 00:09:09.992 "data_size": 65536 00:09:09.992 }, 00:09:09.992 { 00:09:09.992 "name": null, 00:09:09.992 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa", 00:09:09.992 "is_configured": false, 00:09:09.992 "data_offset": 0, 00:09:09.992 "data_size": 65536 00:09:09.992 }, 00:09:09.992 { 00:09:09.992 "name": "BaseBdev3", 00:09:09.992 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a", 
00:09:09.992 "is_configured": true, 00:09:09.992 "data_offset": 0, 00:09:09.992 "data_size": 65536 00:09:09.992 }, 00:09:09.992 { 00:09:09.992 "name": "BaseBdev4", 00:09:09.992 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee", 00:09:09.992 "is_configured": true, 00:09:09.992 "data_offset": 0, 00:09:09.992 "data_size": 65536 00:09:09.992 } 00:09:09.992 ] 00:09:09.992 }' 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.992 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.251 [2024-12-07 01:53:15.696088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:10.251 01:53:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.251 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.510 "name": "Existed_Raid", 00:09:10.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.510 "strip_size_kb": 64, 00:09:10.510 "state": "configuring", 00:09:10.510 "raid_level": "raid0", 00:09:10.510 "superblock": false, 00:09:10.510 "num_base_bdevs": 4, 00:09:10.510 "num_base_bdevs_discovered": 2, 00:09:10.510 
"num_base_bdevs_operational": 4, 00:09:10.510 "base_bdevs_list": [ 00:09:10.510 { 00:09:10.510 "name": null, 00:09:10.510 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:10.510 "is_configured": false, 00:09:10.510 "data_offset": 0, 00:09:10.510 "data_size": 65536 00:09:10.510 }, 00:09:10.510 { 00:09:10.510 "name": null, 00:09:10.510 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa", 00:09:10.510 "is_configured": false, 00:09:10.510 "data_offset": 0, 00:09:10.510 "data_size": 65536 00:09:10.510 }, 00:09:10.510 { 00:09:10.510 "name": "BaseBdev3", 00:09:10.510 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a", 00:09:10.510 "is_configured": true, 00:09:10.510 "data_offset": 0, 00:09:10.510 "data_size": 65536 00:09:10.510 }, 00:09:10.510 { 00:09:10.510 "name": "BaseBdev4", 00:09:10.510 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee", 00:09:10.510 "is_configured": true, 00:09:10.510 "data_offset": 0, 00:09:10.510 "data_size": 65536 00:09:10.510 } 00:09:10.510 ] 00:09:10.510 }' 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.510 01:53:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.769 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.769 [2024-12-07 01:53:16.225639] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.027 
01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.027 "name": "Existed_Raid", 00:09:11.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.027 "strip_size_kb": 64, 00:09:11.027 "state": "configuring", 00:09:11.027 "raid_level": "raid0", 00:09:11.027 "superblock": false, 00:09:11.027 "num_base_bdevs": 4, 00:09:11.027 "num_base_bdevs_discovered": 3, 00:09:11.027 "num_base_bdevs_operational": 4, 00:09:11.027 "base_bdevs_list": [ 00:09:11.027 { 00:09:11.027 "name": null, 00:09:11.027 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:11.027 "is_configured": false, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 65536 00:09:11.027 }, 00:09:11.027 { 00:09:11.027 "name": "BaseBdev2", 00:09:11.027 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa", 00:09:11.027 "is_configured": true, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 65536 00:09:11.027 }, 00:09:11.027 { 00:09:11.027 "name": "BaseBdev3", 00:09:11.027 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a", 00:09:11.027 "is_configured": true, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 65536 00:09:11.027 }, 00:09:11.027 { 00:09:11.027 "name": "BaseBdev4", 00:09:11.027 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee", 00:09:11.027 "is_configured": true, 00:09:11.027 "data_offset": 0, 00:09:11.027 "data_size": 65536 00:09:11.027 } 00:09:11.027 ] 00:09:11.027 }' 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.027 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.286 01:53:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:11.286 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u df5ce68e-647a-4c4a-9555-1a132f697e8c 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.546 [2024-12-07 01:53:16.779432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:11.546 [2024-12-07 01:53:16.779539] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:11.546 [2024-12-07 01:53:16.779564] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:11.546 [2024-12-07 01:53:16.779844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 
00:09:11.546 [2024-12-07 01:53:16.779988] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:11.546 [2024-12-07 01:53:16.780028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:11.546 [2024-12-07 01:53:16.780227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.546 NewBaseBdev 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.546 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:11.546 [ 00:09:11.546 { 00:09:11.546 "name": "NewBaseBdev", 00:09:11.546 "aliases": [ 00:09:11.546 "df5ce68e-647a-4c4a-9555-1a132f697e8c" 00:09:11.547 ], 00:09:11.547 "product_name": "Malloc disk", 00:09:11.547 "block_size": 512, 00:09:11.547 "num_blocks": 65536, 00:09:11.547 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:11.547 "assigned_rate_limits": { 00:09:11.547 "rw_ios_per_sec": 0, 00:09:11.547 "rw_mbytes_per_sec": 0, 00:09:11.547 "r_mbytes_per_sec": 0, 00:09:11.547 "w_mbytes_per_sec": 0 00:09:11.547 }, 00:09:11.547 "claimed": true, 00:09:11.547 "claim_type": "exclusive_write", 00:09:11.547 "zoned": false, 00:09:11.547 "supported_io_types": { 00:09:11.547 "read": true, 00:09:11.547 "write": true, 00:09:11.547 "unmap": true, 00:09:11.547 "flush": true, 00:09:11.547 "reset": true, 00:09:11.547 "nvme_admin": false, 00:09:11.547 "nvme_io": false, 00:09:11.547 "nvme_io_md": false, 00:09:11.547 "write_zeroes": true, 00:09:11.547 "zcopy": true, 00:09:11.547 "get_zone_info": false, 00:09:11.547 "zone_management": false, 00:09:11.547 "zone_append": false, 00:09:11.547 "compare": false, 00:09:11.547 "compare_and_write": false, 00:09:11.547 "abort": true, 00:09:11.547 "seek_hole": false, 00:09:11.547 "seek_data": false, 00:09:11.547 "copy": true, 00:09:11.547 "nvme_iov_md": false 00:09:11.547 }, 00:09:11.547 "memory_domains": [ 00:09:11.547 { 00:09:11.547 "dma_device_id": "system", 00:09:11.547 "dma_device_type": 1 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.547 "dma_device_type": 2 00:09:11.547 } 00:09:11.547 ], 00:09:11.547 "driver_specific": {} 00:09:11.547 } 00:09:11.547 ] 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.547 "name": "Existed_Raid", 00:09:11.547 "uuid": "d5ca4c0d-ed8c-41a4-a266-d8905910e16e", 00:09:11.547 "strip_size_kb": 64, 00:09:11.547 "state": "online", 00:09:11.547 "raid_level": "raid0", 00:09:11.547 "superblock": false, 00:09:11.547 "num_base_bdevs": 4, 00:09:11.547 
"num_base_bdevs_discovered": 4, 00:09:11.547 "num_base_bdevs_operational": 4, 00:09:11.547 "base_bdevs_list": [ 00:09:11.547 { 00:09:11.547 "name": "NewBaseBdev", 00:09:11.547 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:11.547 "is_configured": true, 00:09:11.547 "data_offset": 0, 00:09:11.547 "data_size": 65536 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "name": "BaseBdev2", 00:09:11.547 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa", 00:09:11.547 "is_configured": true, 00:09:11.547 "data_offset": 0, 00:09:11.547 "data_size": 65536 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "name": "BaseBdev3", 00:09:11.547 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a", 00:09:11.547 "is_configured": true, 00:09:11.547 "data_offset": 0, 00:09:11.547 "data_size": 65536 00:09:11.547 }, 00:09:11.547 { 00:09:11.547 "name": "BaseBdev4", 00:09:11.547 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee", 00:09:11.547 "is_configured": true, 00:09:11.547 "data_offset": 0, 00:09:11.547 "data_size": 65536 00:09:11.547 } 00:09:11.547 ] 00:09:11.547 }' 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.547 01:53:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.807 [2024-12-07 01:53:17.223106] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.807 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.807 "name": "Existed_Raid", 00:09:11.807 "aliases": [ 00:09:11.807 "d5ca4c0d-ed8c-41a4-a266-d8905910e16e" 00:09:11.807 ], 00:09:11.807 "product_name": "Raid Volume", 00:09:11.807 "block_size": 512, 00:09:11.807 "num_blocks": 262144, 00:09:11.807 "uuid": "d5ca4c0d-ed8c-41a4-a266-d8905910e16e", 00:09:11.807 "assigned_rate_limits": { 00:09:11.807 "rw_ios_per_sec": 0, 00:09:11.807 "rw_mbytes_per_sec": 0, 00:09:11.807 "r_mbytes_per_sec": 0, 00:09:11.807 "w_mbytes_per_sec": 0 00:09:11.807 }, 00:09:11.807 "claimed": false, 00:09:11.807 "zoned": false, 00:09:11.807 "supported_io_types": { 00:09:11.807 "read": true, 00:09:11.807 "write": true, 00:09:11.807 "unmap": true, 00:09:11.807 "flush": true, 00:09:11.807 "reset": true, 00:09:11.807 "nvme_admin": false, 00:09:11.807 "nvme_io": false, 00:09:11.807 "nvme_io_md": false, 00:09:11.808 "write_zeroes": true, 00:09:11.808 "zcopy": false, 00:09:11.808 "get_zone_info": false, 00:09:11.808 "zone_management": false, 00:09:11.808 "zone_append": false, 00:09:11.808 "compare": false, 00:09:11.808 "compare_and_write": false, 00:09:11.808 "abort": false, 00:09:11.808 "seek_hole": false, 00:09:11.808 "seek_data": false, 00:09:11.808 "copy": false, 00:09:11.808 "nvme_iov_md": false 00:09:11.808 }, 00:09:11.808 "memory_domains": [ 
00:09:11.808 { 00:09:11.808 "dma_device_id": "system", 00:09:11.808 "dma_device_type": 1 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.808 "dma_device_type": 2 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "system", 00:09:11.808 "dma_device_type": 1 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.808 "dma_device_type": 2 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "system", 00:09:11.808 "dma_device_type": 1 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.808 "dma_device_type": 2 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "system", 00:09:11.808 "dma_device_type": 1 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.808 "dma_device_type": 2 00:09:11.808 } 00:09:11.808 ], 00:09:11.808 "driver_specific": { 00:09:11.808 "raid": { 00:09:11.808 "uuid": "d5ca4c0d-ed8c-41a4-a266-d8905910e16e", 00:09:11.808 "strip_size_kb": 64, 00:09:11.808 "state": "online", 00:09:11.808 "raid_level": "raid0", 00:09:11.808 "superblock": false, 00:09:11.808 "num_base_bdevs": 4, 00:09:11.808 "num_base_bdevs_discovered": 4, 00:09:11.808 "num_base_bdevs_operational": 4, 00:09:11.808 "base_bdevs_list": [ 00:09:11.808 { 00:09:11.808 "name": "NewBaseBdev", 00:09:11.808 "uuid": "df5ce68e-647a-4c4a-9555-1a132f697e8c", 00:09:11.808 "is_configured": true, 00:09:11.808 "data_offset": 0, 00:09:11.808 "data_size": 65536 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "name": "BaseBdev2", 00:09:11.808 "uuid": "8a20fd45-b89e-46f0-85da-ce08c3f72daa", 00:09:11.808 "is_configured": true, 00:09:11.808 "data_offset": 0, 00:09:11.808 "data_size": 65536 00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "name": "BaseBdev3", 00:09:11.808 "uuid": "b02644ab-0461-46f0-a2b8-5342af06e94a", 00:09:11.808 "is_configured": true, 00:09:11.808 "data_offset": 0, 00:09:11.808 "data_size": 65536 
00:09:11.808 }, 00:09:11.808 { 00:09:11.808 "name": "BaseBdev4", 00:09:11.808 "uuid": "6e81ea4a-bb59-4303-b6bd-a3762222e3ee", 00:09:11.808 "is_configured": true, 00:09:11.808 "data_offset": 0, 00:09:11.808 "data_size": 65536 00:09:11.808 } 00:09:11.808 ] 00:09:11.808 } 00:09:11.808 } 00:09:11.808 }' 00:09:11.808 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:12.068 BaseBdev2 00:09:12.068 BaseBdev3 00:09:12.068 BaseBdev4' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.068 
01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.068 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.328 [2024-12-07 01:53:17.550193] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.328 [2024-12-07 01:53:17.550260] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.328 [2024-12-07 01:53:17.550384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.328 [2024-12-07 01:53:17.550474] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.328 [2024-12-07 01:53:17.550520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80083 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 80083 ']' 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 80083 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80083 00:09:12.328 killing process with pid 80083 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80083' 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 80083 00:09:12.328 [2024-12-07 01:53:17.598458] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:12.328 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 80083 00:09:12.328 [2024-12-07 01:53:17.638909] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:12.588 01:53:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:12.588 00:09:12.588 real 0m9.448s 00:09:12.588 user 0m16.267s 00:09:12.588 sys 0m1.879s 00:09:12.588 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.589 ************************************ 00:09:12.589 END TEST raid_state_function_test 00:09:12.589 ************************************ 00:09:12.589 01:53:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:12.589 01:53:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:12.589 01:53:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.589 01:53:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:12.589 ************************************ 00:09:12.589 START TEST raid_state_function_test_sb 00:09:12.589 ************************************ 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:12.589 
01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80732 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80732' 00:09:12.589 Process raid pid: 80732 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80732 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80732 ']' 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.589 01:53:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.589 [2024-12-07 01:53:18.031124] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:12.589 [2024-12-07 01:53:18.031335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.848 [2024-12-07 01:53:18.176567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.849 [2024-12-07 01:53:18.221926] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.849 [2024-12-07 01:53:18.263523] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.849 [2024-12-07 01:53:18.263619] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.418 [2024-12-07 01:53:18.856359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.418 [2024-12-07 01:53:18.856456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.418 [2024-12-07 01:53:18.856487] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.418 [2024-12-07 01:53:18.856513] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.418 [2024-12-07 01:53:18.856531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:13.418 [2024-12-07 01:53:18.856555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.418 [2024-12-07 01:53:18.856572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:13.418 [2024-12-07 01:53:18.856592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:13.418 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.419 01:53:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.419 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.678 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.678 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.678 "name": "Existed_Raid", 00:09:13.678 "uuid": "8d881708-5559-40cf-bd45-bdb7d1d0de22", 00:09:13.678 "strip_size_kb": 64, 00:09:13.678 "state": "configuring", 00:09:13.678 "raid_level": "raid0", 00:09:13.678 "superblock": true, 00:09:13.678 "num_base_bdevs": 4, 00:09:13.678 "num_base_bdevs_discovered": 0, 00:09:13.678 "num_base_bdevs_operational": 4, 00:09:13.678 "base_bdevs_list": [ 00:09:13.678 { 00:09:13.678 "name": "BaseBdev1", 00:09:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.678 "is_configured": false, 00:09:13.678 "data_offset": 0, 00:09:13.678 "data_size": 0 00:09:13.678 }, 00:09:13.678 { 00:09:13.678 "name": "BaseBdev2", 00:09:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.678 "is_configured": false, 00:09:13.678 "data_offset": 0, 00:09:13.678 "data_size": 0 00:09:13.678 }, 00:09:13.678 { 00:09:13.678 "name": "BaseBdev3", 00:09:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.678 "is_configured": false, 00:09:13.678 "data_offset": 0, 00:09:13.678 "data_size": 0 00:09:13.678 }, 00:09:13.678 { 00:09:13.678 "name": "BaseBdev4", 00:09:13.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.678 "is_configured": false, 00:09:13.678 "data_offset": 0, 00:09:13.678 "data_size": 0 00:09:13.678 } 00:09:13.678 ] 00:09:13.678 }' 00:09:13.678 01:53:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.678 01:53:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 [2024-12-07 01:53:19.295529] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.939 [2024-12-07 01:53:19.295609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 [2024-12-07 01:53:19.307534] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.939 [2024-12-07 01:53:19.307607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.939 [2024-12-07 01:53:19.307638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.939 [2024-12-07 01:53:19.307674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.939 [2024-12-07 01:53:19.307736] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.939 [2024-12-07 01:53:19.307748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.939 [2024-12-07 01:53:19.307755] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:13.939 [2024-12-07 01:53:19.307764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 [2024-12-07 01:53:19.328413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.939 BaseBdev1 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 [ 00:09:13.939 { 00:09:13.939 "name": "BaseBdev1", 00:09:13.939 "aliases": [ 00:09:13.939 "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a" 00:09:13.939 ], 00:09:13.939 "product_name": "Malloc disk", 00:09:13.939 "block_size": 512, 00:09:13.939 "num_blocks": 65536, 00:09:13.939 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:13.939 "assigned_rate_limits": { 00:09:13.939 "rw_ios_per_sec": 0, 00:09:13.939 "rw_mbytes_per_sec": 0, 00:09:13.939 "r_mbytes_per_sec": 0, 00:09:13.939 "w_mbytes_per_sec": 0 00:09:13.939 }, 00:09:13.939 "claimed": true, 00:09:13.939 "claim_type": "exclusive_write", 00:09:13.939 "zoned": false, 00:09:13.939 "supported_io_types": { 00:09:13.939 "read": true, 00:09:13.939 "write": true, 00:09:13.939 "unmap": true, 00:09:13.939 "flush": true, 00:09:13.939 "reset": true, 00:09:13.939 "nvme_admin": false, 00:09:13.939 "nvme_io": false, 00:09:13.939 "nvme_io_md": false, 00:09:13.939 "write_zeroes": true, 00:09:13.939 "zcopy": true, 00:09:13.939 "get_zone_info": false, 00:09:13.939 "zone_management": false, 00:09:13.939 "zone_append": false, 00:09:13.939 "compare": false, 00:09:13.939 "compare_and_write": false, 00:09:13.939 "abort": true, 00:09:13.939 "seek_hole": false, 00:09:13.939 "seek_data": false, 00:09:13.939 "copy": true, 00:09:13.939 "nvme_iov_md": false 00:09:13.939 }, 00:09:13.939 "memory_domains": [ 00:09:13.939 { 00:09:13.939 "dma_device_id": "system", 00:09:13.939 "dma_device_type": 1 00:09:13.939 }, 00:09:13.939 { 00:09:13.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.939 "dma_device_type": 2 00:09:13.939 } 00:09:13.939 ], 00:09:13.939 "driver_specific": {} 
00:09:13.939 } 00:09:13.939 ] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.939 01:53:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.200 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.200 "name": "Existed_Raid", 00:09:14.200 "uuid": "a9abab90-54ab-4f25-9ae1-1360b7e650ad", 00:09:14.200 "strip_size_kb": 64, 00:09:14.200 "state": "configuring", 00:09:14.200 "raid_level": "raid0", 00:09:14.200 "superblock": true, 00:09:14.200 "num_base_bdevs": 4, 00:09:14.200 "num_base_bdevs_discovered": 1, 00:09:14.200 "num_base_bdevs_operational": 4, 00:09:14.200 "base_bdevs_list": [ 00:09:14.200 { 00:09:14.200 "name": "BaseBdev1", 00:09:14.200 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:14.200 "is_configured": true, 00:09:14.200 "data_offset": 2048, 00:09:14.200 "data_size": 63488 00:09:14.200 }, 00:09:14.200 { 00:09:14.200 "name": "BaseBdev2", 00:09:14.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.200 "is_configured": false, 00:09:14.200 "data_offset": 0, 00:09:14.200 "data_size": 0 00:09:14.200 }, 00:09:14.200 { 00:09:14.200 "name": "BaseBdev3", 00:09:14.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.200 "is_configured": false, 00:09:14.200 "data_offset": 0, 00:09:14.200 "data_size": 0 00:09:14.200 }, 00:09:14.200 { 00:09:14.200 "name": "BaseBdev4", 00:09:14.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.200 "is_configured": false, 00:09:14.200 "data_offset": 0, 00:09:14.200 "data_size": 0 00:09:14.200 } 00:09:14.200 ] 00:09:14.200 }' 00:09:14.200 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.200 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.459 [2024-12-07 01:53:19.795693] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.459 [2024-12-07 01:53:19.795783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.459 [2024-12-07 01:53:19.807729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.459 [2024-12-07 01:53:19.809527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.459 [2024-12-07 01:53:19.809608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.459 [2024-12-07 01:53:19.809637] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.459 [2024-12-07 01:53:19.809648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.459 [2024-12-07 01:53:19.809655] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:14.459 [2024-12-07 01:53:19.809674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:14.459 01:53:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.459 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.459 "name": 
"Existed_Raid", 00:09:14.459 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:14.459 "strip_size_kb": 64, 00:09:14.459 "state": "configuring", 00:09:14.459 "raid_level": "raid0", 00:09:14.459 "superblock": true, 00:09:14.459 "num_base_bdevs": 4, 00:09:14.459 "num_base_bdevs_discovered": 1, 00:09:14.459 "num_base_bdevs_operational": 4, 00:09:14.459 "base_bdevs_list": [ 00:09:14.459 { 00:09:14.459 "name": "BaseBdev1", 00:09:14.459 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:14.459 "is_configured": true, 00:09:14.459 "data_offset": 2048, 00:09:14.459 "data_size": 63488 00:09:14.459 }, 00:09:14.459 { 00:09:14.459 "name": "BaseBdev2", 00:09:14.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.459 "is_configured": false, 00:09:14.459 "data_offset": 0, 00:09:14.459 "data_size": 0 00:09:14.459 }, 00:09:14.459 { 00:09:14.459 "name": "BaseBdev3", 00:09:14.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.459 "is_configured": false, 00:09:14.459 "data_offset": 0, 00:09:14.459 "data_size": 0 00:09:14.459 }, 00:09:14.459 { 00:09:14.459 "name": "BaseBdev4", 00:09:14.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.459 "is_configured": false, 00:09:14.459 "data_offset": 0, 00:09:14.459 "data_size": 0 00:09:14.459 } 00:09:14.459 ] 00:09:14.459 }' 00:09:14.460 01:53:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.460 01:53:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 [2024-12-07 01:53:20.290213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:15.028 BaseBdev2 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 [ 00:09:15.028 { 00:09:15.028 "name": "BaseBdev2", 00:09:15.028 "aliases": [ 00:09:15.028 "c77b5137-3b7e-4cbc-b728-62d38d9ccacb" 00:09:15.028 ], 00:09:15.028 "product_name": "Malloc disk", 00:09:15.028 "block_size": 512, 00:09:15.028 "num_blocks": 65536, 00:09:15.028 "uuid": "c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:15.028 
"assigned_rate_limits": { 00:09:15.028 "rw_ios_per_sec": 0, 00:09:15.028 "rw_mbytes_per_sec": 0, 00:09:15.028 "r_mbytes_per_sec": 0, 00:09:15.028 "w_mbytes_per_sec": 0 00:09:15.028 }, 00:09:15.028 "claimed": true, 00:09:15.028 "claim_type": "exclusive_write", 00:09:15.028 "zoned": false, 00:09:15.028 "supported_io_types": { 00:09:15.028 "read": true, 00:09:15.028 "write": true, 00:09:15.028 "unmap": true, 00:09:15.028 "flush": true, 00:09:15.028 "reset": true, 00:09:15.028 "nvme_admin": false, 00:09:15.028 "nvme_io": false, 00:09:15.028 "nvme_io_md": false, 00:09:15.028 "write_zeroes": true, 00:09:15.028 "zcopy": true, 00:09:15.028 "get_zone_info": false, 00:09:15.028 "zone_management": false, 00:09:15.028 "zone_append": false, 00:09:15.028 "compare": false, 00:09:15.028 "compare_and_write": false, 00:09:15.028 "abort": true, 00:09:15.028 "seek_hole": false, 00:09:15.028 "seek_data": false, 00:09:15.028 "copy": true, 00:09:15.028 "nvme_iov_md": false 00:09:15.028 }, 00:09:15.028 "memory_domains": [ 00:09:15.028 { 00:09:15.028 "dma_device_id": "system", 00:09:15.028 "dma_device_type": 1 00:09:15.028 }, 00:09:15.028 { 00:09:15.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.028 "dma_device_type": 2 00:09:15.028 } 00:09:15.028 ], 00:09:15.028 "driver_specific": {} 00:09:15.028 } 00:09:15.028 ] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.028 "name": "Existed_Raid", 00:09:15.028 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:15.028 "strip_size_kb": 64, 00:09:15.028 "state": "configuring", 00:09:15.028 "raid_level": "raid0", 00:09:15.028 "superblock": true, 00:09:15.028 "num_base_bdevs": 4, 00:09:15.028 "num_base_bdevs_discovered": 2, 00:09:15.028 "num_base_bdevs_operational": 4, 
00:09:15.028 "base_bdevs_list": [ 00:09:15.028 { 00:09:15.028 "name": "BaseBdev1", 00:09:15.028 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:15.028 "is_configured": true, 00:09:15.028 "data_offset": 2048, 00:09:15.028 "data_size": 63488 00:09:15.028 }, 00:09:15.028 { 00:09:15.028 "name": "BaseBdev2", 00:09:15.028 "uuid": "c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:15.028 "is_configured": true, 00:09:15.028 "data_offset": 2048, 00:09:15.028 "data_size": 63488 00:09:15.028 }, 00:09:15.028 { 00:09:15.028 "name": "BaseBdev3", 00:09:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.028 "is_configured": false, 00:09:15.028 "data_offset": 0, 00:09:15.028 "data_size": 0 00:09:15.028 }, 00:09:15.028 { 00:09:15.028 "name": "BaseBdev4", 00:09:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.028 "is_configured": false, 00:09:15.028 "data_offset": 0, 00:09:15.028 "data_size": 0 00:09:15.028 } 00:09:15.028 ] 00:09:15.028 }' 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.028 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.596 [2024-12-07 01:53:20.792113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.596 BaseBdev3 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.596 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.596 [ 00:09:15.596 { 00:09:15.596 "name": "BaseBdev3", 00:09:15.596 "aliases": [ 00:09:15.596 "85bc1222-d1bc-4bf2-8768-93872d00439e" 00:09:15.596 ], 00:09:15.596 "product_name": "Malloc disk", 00:09:15.596 "block_size": 512, 00:09:15.596 "num_blocks": 65536, 00:09:15.596 "uuid": "85bc1222-d1bc-4bf2-8768-93872d00439e", 00:09:15.596 "assigned_rate_limits": { 00:09:15.596 "rw_ios_per_sec": 0, 00:09:15.596 "rw_mbytes_per_sec": 0, 00:09:15.597 "r_mbytes_per_sec": 0, 00:09:15.597 "w_mbytes_per_sec": 0 00:09:15.597 }, 00:09:15.597 "claimed": true, 00:09:15.597 "claim_type": "exclusive_write", 00:09:15.597 "zoned": false, 00:09:15.597 "supported_io_types": { 00:09:15.597 "read": true, 00:09:15.597 
"write": true, 00:09:15.597 "unmap": true, 00:09:15.597 "flush": true, 00:09:15.597 "reset": true, 00:09:15.597 "nvme_admin": false, 00:09:15.597 "nvme_io": false, 00:09:15.597 "nvme_io_md": false, 00:09:15.597 "write_zeroes": true, 00:09:15.597 "zcopy": true, 00:09:15.597 "get_zone_info": false, 00:09:15.597 "zone_management": false, 00:09:15.597 "zone_append": false, 00:09:15.597 "compare": false, 00:09:15.597 "compare_and_write": false, 00:09:15.597 "abort": true, 00:09:15.597 "seek_hole": false, 00:09:15.597 "seek_data": false, 00:09:15.597 "copy": true, 00:09:15.597 "nvme_iov_md": false 00:09:15.597 }, 00:09:15.597 "memory_domains": [ 00:09:15.597 { 00:09:15.597 "dma_device_id": "system", 00:09:15.597 "dma_device_type": 1 00:09:15.597 }, 00:09:15.597 { 00:09:15.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.597 "dma_device_type": 2 00:09:15.597 } 00:09:15.597 ], 00:09:15.597 "driver_specific": {} 00:09:15.597 } 00:09:15.597 ] 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.597 "name": "Existed_Raid", 00:09:15.597 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:15.597 "strip_size_kb": 64, 00:09:15.597 "state": "configuring", 00:09:15.597 "raid_level": "raid0", 00:09:15.597 "superblock": true, 00:09:15.597 "num_base_bdevs": 4, 00:09:15.597 "num_base_bdevs_discovered": 3, 00:09:15.597 "num_base_bdevs_operational": 4, 00:09:15.597 "base_bdevs_list": [ 00:09:15.597 { 00:09:15.597 "name": "BaseBdev1", 00:09:15.597 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:15.597 "is_configured": true, 00:09:15.597 "data_offset": 2048, 00:09:15.597 "data_size": 63488 00:09:15.597 }, 00:09:15.597 { 00:09:15.597 "name": "BaseBdev2", 00:09:15.597 "uuid": 
"c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:15.597 "is_configured": true, 00:09:15.597 "data_offset": 2048, 00:09:15.597 "data_size": 63488 00:09:15.597 }, 00:09:15.597 { 00:09:15.597 "name": "BaseBdev3", 00:09:15.597 "uuid": "85bc1222-d1bc-4bf2-8768-93872d00439e", 00:09:15.597 "is_configured": true, 00:09:15.597 "data_offset": 2048, 00:09:15.597 "data_size": 63488 00:09:15.597 }, 00:09:15.597 { 00:09:15.597 "name": "BaseBdev4", 00:09:15.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.597 "is_configured": false, 00:09:15.597 "data_offset": 0, 00:09:15.597 "data_size": 0 00:09:15.597 } 00:09:15.597 ] 00:09:15.597 }' 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.597 01:53:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.856 [2024-12-07 01:53:21.282205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:15.856 [2024-12-07 01:53:21.282527] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:15.856 [2024-12-07 01:53:21.282582] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:15.856 BaseBdev4 00:09:15.856 [2024-12-07 01:53:21.282905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:15.856 [2024-12-07 01:53:21.283042] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:15.856 [2024-12-07 01:53:21.283076] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:15.856 [2024-12-07 01:53:21.283197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:15.856 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.857 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.857 [ 00:09:15.857 { 00:09:15.857 "name": "BaseBdev4", 00:09:15.857 "aliases": [ 00:09:15.857 "9dfff03c-7f4c-493f-9002-af78d6989502" 00:09:15.857 ], 00:09:15.857 "product_name": "Malloc disk", 00:09:15.857 "block_size": 512, 00:09:15.857 
"num_blocks": 65536, 00:09:15.857 "uuid": "9dfff03c-7f4c-493f-9002-af78d6989502", 00:09:15.857 "assigned_rate_limits": { 00:09:15.857 "rw_ios_per_sec": 0, 00:09:15.857 "rw_mbytes_per_sec": 0, 00:09:15.857 "r_mbytes_per_sec": 0, 00:09:15.857 "w_mbytes_per_sec": 0 00:09:15.857 }, 00:09:15.857 "claimed": true, 00:09:15.857 "claim_type": "exclusive_write", 00:09:15.857 "zoned": false, 00:09:15.857 "supported_io_types": { 00:09:15.857 "read": true, 00:09:15.857 "write": true, 00:09:15.857 "unmap": true, 00:09:15.857 "flush": true, 00:09:15.857 "reset": true, 00:09:15.857 "nvme_admin": false, 00:09:15.857 "nvme_io": false, 00:09:15.857 "nvme_io_md": false, 00:09:15.857 "write_zeroes": true, 00:09:15.857 "zcopy": true, 00:09:15.857 "get_zone_info": false, 00:09:15.857 "zone_management": false, 00:09:16.116 "zone_append": false, 00:09:16.116 "compare": false, 00:09:16.116 "compare_and_write": false, 00:09:16.116 "abort": true, 00:09:16.116 "seek_hole": false, 00:09:16.116 "seek_data": false, 00:09:16.116 "copy": true, 00:09:16.116 "nvme_iov_md": false 00:09:16.116 }, 00:09:16.116 "memory_domains": [ 00:09:16.116 { 00:09:16.116 "dma_device_id": "system", 00:09:16.116 "dma_device_type": 1 00:09:16.116 }, 00:09:16.116 { 00:09:16.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.116 "dma_device_type": 2 00:09:16.116 } 00:09:16.116 ], 00:09:16.116 "driver_specific": {} 00:09:16.116 } 00:09:16.116 ] 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.116 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.116 "name": "Existed_Raid", 00:09:16.116 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:16.116 "strip_size_kb": 64, 00:09:16.116 "state": "online", 00:09:16.116 "raid_level": "raid0", 00:09:16.116 "superblock": true, 00:09:16.116 "num_base_bdevs": 4, 
00:09:16.116 "num_base_bdevs_discovered": 4, 00:09:16.116 "num_base_bdevs_operational": 4, 00:09:16.116 "base_bdevs_list": [ 00:09:16.116 { 00:09:16.116 "name": "BaseBdev1", 00:09:16.116 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:16.116 "is_configured": true, 00:09:16.116 "data_offset": 2048, 00:09:16.116 "data_size": 63488 00:09:16.116 }, 00:09:16.116 { 00:09:16.116 "name": "BaseBdev2", 00:09:16.116 "uuid": "c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:16.116 "is_configured": true, 00:09:16.116 "data_offset": 2048, 00:09:16.116 "data_size": 63488 00:09:16.116 }, 00:09:16.116 { 00:09:16.116 "name": "BaseBdev3", 00:09:16.116 "uuid": "85bc1222-d1bc-4bf2-8768-93872d00439e", 00:09:16.116 "is_configured": true, 00:09:16.116 "data_offset": 2048, 00:09:16.116 "data_size": 63488 00:09:16.116 }, 00:09:16.116 { 00:09:16.116 "name": "BaseBdev4", 00:09:16.116 "uuid": "9dfff03c-7f4c-493f-9002-af78d6989502", 00:09:16.116 "is_configured": true, 00:09:16.116 "data_offset": 2048, 00:09:16.116 "data_size": 63488 00:09:16.116 } 00:09:16.116 ] 00:09:16.116 }' 00:09:16.117 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.117 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:16.416 
01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.416 [2024-12-07 01:53:21.701861] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.416 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:16.416 "name": "Existed_Raid", 00:09:16.416 "aliases": [ 00:09:16.416 "cae9d42d-13c1-45cc-b045-cd5e1ef982d2" 00:09:16.416 ], 00:09:16.416 "product_name": "Raid Volume", 00:09:16.416 "block_size": 512, 00:09:16.416 "num_blocks": 253952, 00:09:16.416 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:16.416 "assigned_rate_limits": { 00:09:16.416 "rw_ios_per_sec": 0, 00:09:16.416 "rw_mbytes_per_sec": 0, 00:09:16.416 "r_mbytes_per_sec": 0, 00:09:16.416 "w_mbytes_per_sec": 0 00:09:16.416 }, 00:09:16.416 "claimed": false, 00:09:16.416 "zoned": false, 00:09:16.416 "supported_io_types": { 00:09:16.416 "read": true, 00:09:16.416 "write": true, 00:09:16.416 "unmap": true, 00:09:16.416 "flush": true, 00:09:16.416 "reset": true, 00:09:16.416 "nvme_admin": false, 00:09:16.416 "nvme_io": false, 00:09:16.416 "nvme_io_md": false, 00:09:16.416 "write_zeroes": true, 00:09:16.416 "zcopy": false, 00:09:16.416 "get_zone_info": false, 00:09:16.416 "zone_management": false, 00:09:16.416 "zone_append": false, 00:09:16.416 "compare": false, 00:09:16.416 "compare_and_write": false, 00:09:16.416 "abort": false, 00:09:16.416 "seek_hole": false, 00:09:16.416 "seek_data": false, 00:09:16.416 "copy": false, 00:09:16.417 
"nvme_iov_md": false 00:09:16.417 }, 00:09:16.417 "memory_domains": [ 00:09:16.417 { 00:09:16.417 "dma_device_id": "system", 00:09:16.417 "dma_device_type": 1 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.417 "dma_device_type": 2 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "system", 00:09:16.417 "dma_device_type": 1 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.417 "dma_device_type": 2 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "system", 00:09:16.417 "dma_device_type": 1 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.417 "dma_device_type": 2 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "system", 00:09:16.417 "dma_device_type": 1 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.417 "dma_device_type": 2 00:09:16.417 } 00:09:16.417 ], 00:09:16.417 "driver_specific": { 00:09:16.417 "raid": { 00:09:16.417 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:16.417 "strip_size_kb": 64, 00:09:16.417 "state": "online", 00:09:16.417 "raid_level": "raid0", 00:09:16.417 "superblock": true, 00:09:16.417 "num_base_bdevs": 4, 00:09:16.417 "num_base_bdevs_discovered": 4, 00:09:16.417 "num_base_bdevs_operational": 4, 00:09:16.417 "base_bdevs_list": [ 00:09:16.417 { 00:09:16.417 "name": "BaseBdev1", 00:09:16.417 "uuid": "8b6dcaa6-4abb-42a5-abfa-db79ef52ca8a", 00:09:16.417 "is_configured": true, 00:09:16.417 "data_offset": 2048, 00:09:16.417 "data_size": 63488 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "name": "BaseBdev2", 00:09:16.417 "uuid": "c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:16.417 "is_configured": true, 00:09:16.417 "data_offset": 2048, 00:09:16.417 "data_size": 63488 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "name": "BaseBdev3", 00:09:16.417 "uuid": "85bc1222-d1bc-4bf2-8768-93872d00439e", 00:09:16.417 "is_configured": true, 
00:09:16.417 "data_offset": 2048, 00:09:16.417 "data_size": 63488 00:09:16.417 }, 00:09:16.417 { 00:09:16.417 "name": "BaseBdev4", 00:09:16.417 "uuid": "9dfff03c-7f4c-493f-9002-af78d6989502", 00:09:16.417 "is_configured": true, 00:09:16.417 "data_offset": 2048, 00:09:16.417 "data_size": 63488 00:09:16.417 } 00:09:16.417 ] 00:09:16.417 } 00:09:16.417 } 00:09:16.417 }' 00:09:16.417 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:16.417 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:16.417 BaseBdev2 00:09:16.417 BaseBdev3 00:09:16.417 BaseBdev4' 00:09:16.417 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.417 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:16.417 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.701 01:53:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:16.701 01:53:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 [2024-12-07 01:53:22.017026] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:16.701 [2024-12-07 01:53:22.017097] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.701 [2024-12-07 01:53:22.017173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.701 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.702 "name": "Existed_Raid", 00:09:16.702 "uuid": "cae9d42d-13c1-45cc-b045-cd5e1ef982d2", 00:09:16.702 "strip_size_kb": 64, 00:09:16.702 "state": "offline", 00:09:16.702 "raid_level": "raid0", 00:09:16.702 "superblock": true, 00:09:16.702 "num_base_bdevs": 4, 00:09:16.702 "num_base_bdevs_discovered": 3, 00:09:16.702 "num_base_bdevs_operational": 3, 00:09:16.702 "base_bdevs_list": [ 00:09:16.702 { 00:09:16.702 "name": null, 00:09:16.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.702 "is_configured": false, 00:09:16.702 "data_offset": 0, 00:09:16.702 "data_size": 63488 00:09:16.702 }, 00:09:16.702 { 00:09:16.702 "name": "BaseBdev2", 00:09:16.702 "uuid": "c77b5137-3b7e-4cbc-b728-62d38d9ccacb", 00:09:16.702 "is_configured": true, 00:09:16.702 "data_offset": 2048, 00:09:16.702 "data_size": 63488 00:09:16.702 }, 00:09:16.702 { 00:09:16.702 "name": "BaseBdev3", 00:09:16.702 "uuid": "85bc1222-d1bc-4bf2-8768-93872d00439e", 00:09:16.702 "is_configured": true, 00:09:16.702 "data_offset": 2048, 00:09:16.702 "data_size": 63488 00:09:16.702 }, 00:09:16.702 { 00:09:16.702 "name": "BaseBdev4", 00:09:16.702 "uuid": "9dfff03c-7f4c-493f-9002-af78d6989502", 00:09:16.702 "is_configured": true, 00:09:16.702 "data_offset": 2048, 00:09:16.702 "data_size": 63488 00:09:16.702 } 00:09:16.702 ] 00:09:16.702 }' 00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.702 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.273 01:53:22 
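The trace above shows `verify_raid_bdev_state` capturing the `bdev_raid_get_bdevs` JSON for `Existed_Raid` and then checking fields such as `"state"` against the expected value. A minimal sketch of that check, using pure bash parameter expansion as a stand-in for the script's actual `jq -r '.[] | select(.name == "Existed_Raid")'` extraction (the JSON literal and the expansion trick here are illustrative, not the real `bdev_raid.sh` code):

```shell
# Hedged sketch: extract "state" from a captured raid_bdev_info blob and
# compare it with the expected state, as the verify step in the trace does.
raid_bdev_info='{ "name": "Existed_Raid", "state": "offline", "raid_level": "raid0" }'
state=${raid_bdev_info#*\"state\": \"}   # drop everything up to the state value
state=${state%%\"*}                      # drop everything after it
expected_state=offline
if [[ $state == "$expected_state" ]]; then
  echo "state ok: $state"
fi
```

In the real test the blob comes from `rpc_cmd bdev_raid_get_bdevs all` and the comparison drives the pass/fail result for the offline transition after `bdev_malloc_delete BaseBdev1`.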
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 [2024-12-07 01:53:22.503433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 [2024-12-07 01:53:22.570324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:17.273 01:53:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 [2024-12-07 01:53:22.637381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:17.273 [2024-12-07 01:53:22.637464] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.273 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.533 [ 00:09:17.534 { 00:09:17.534 "name": "BaseBdev2", 00:09:17.534 "aliases": [ 00:09:17.534 
"ce2fa110-2276-43c4-bcce-60584a666fea" 00:09:17.534 ], 00:09:17.534 "product_name": "Malloc disk", 00:09:17.534 "block_size": 512, 00:09:17.534 "num_blocks": 65536, 00:09:17.534 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:17.534 "assigned_rate_limits": { 00:09:17.534 "rw_ios_per_sec": 0, 00:09:17.534 "rw_mbytes_per_sec": 0, 00:09:17.534 "r_mbytes_per_sec": 0, 00:09:17.534 "w_mbytes_per_sec": 0 00:09:17.534 }, 00:09:17.534 "claimed": false, 00:09:17.534 "zoned": false, 00:09:17.534 "supported_io_types": { 00:09:17.534 "read": true, 00:09:17.534 "write": true, 00:09:17.534 "unmap": true, 00:09:17.534 "flush": true, 00:09:17.534 "reset": true, 00:09:17.534 "nvme_admin": false, 00:09:17.534 "nvme_io": false, 00:09:17.534 "nvme_io_md": false, 00:09:17.534 "write_zeroes": true, 00:09:17.534 "zcopy": true, 00:09:17.534 "get_zone_info": false, 00:09:17.534 "zone_management": false, 00:09:17.534 "zone_append": false, 00:09:17.534 "compare": false, 00:09:17.534 "compare_and_write": false, 00:09:17.534 "abort": true, 00:09:17.534 "seek_hole": false, 00:09:17.534 "seek_data": false, 00:09:17.534 "copy": true, 00:09:17.534 "nvme_iov_md": false 00:09:17.534 }, 00:09:17.534 "memory_domains": [ 00:09:17.534 { 00:09:17.534 "dma_device_id": "system", 00:09:17.534 "dma_device_type": 1 00:09:17.534 }, 00:09:17.534 { 00:09:17.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.534 "dma_device_type": 2 00:09:17.534 } 00:09:17.534 ], 00:09:17.534 "driver_specific": {} 00:09:17.534 } 00:09:17.534 ] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.534 01:53:22 
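Each `bdev_malloc_create ... -b BaseBdevN` in the trace is followed by `waitforbdev`, which polls `bdev_get_bdevs -b <name> -t 2000` until the bdev shows up. A simplified stand-in for that polling pattern (the real helper lives in `autotest_common.sh` and goes through `rpc_cmd`; `list_bdevs` below is an assumed stub, not an SPDK command):

```shell
# Hedged sketch of the waitforbdev pattern: poll a bdev listing until the
# named bdev appears or the timeout (2000 ms by default) expires.
list_bdevs() { echo "BaseBdev2 BaseBdev3"; }   # stub for: rpc_cmd bdev_get_bdevs

waitforbdev_sketch() {
  local bdev_name=$1 bdev_timeout=${2:-2000} i
  for ((i = 0; i < bdev_timeout; i += 100)); do
    if list_bdevs | grep -qw "$bdev_name"; then
      echo "found $bdev_name"
      return 0
    fi
    sleep 0.1   # back off between polls
  done
  return 1
}

waitforbdev_sketch BaseBdev2
```

The timeout default of 2000 matches the `bdev_timeout=2000` assignment visible in the trace when no explicit timeout is passed.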
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.534 BaseBdev3 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.534 [ 00:09:17.534 { 
00:09:17.534 "name": "BaseBdev3", 00:09:17.534 "aliases": [ 00:09:17.534 "87a34420-1fc0-49e0-b570-653d0f71ad81" 00:09:17.534 ], 00:09:17.534 "product_name": "Malloc disk", 00:09:17.534 "block_size": 512, 00:09:17.534 "num_blocks": 65536, 00:09:17.534 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:17.534 "assigned_rate_limits": { 00:09:17.534 "rw_ios_per_sec": 0, 00:09:17.534 "rw_mbytes_per_sec": 0, 00:09:17.534 "r_mbytes_per_sec": 0, 00:09:17.534 "w_mbytes_per_sec": 0 00:09:17.534 }, 00:09:17.534 "claimed": false, 00:09:17.534 "zoned": false, 00:09:17.534 "supported_io_types": { 00:09:17.534 "read": true, 00:09:17.534 "write": true, 00:09:17.534 "unmap": true, 00:09:17.534 "flush": true, 00:09:17.534 "reset": true, 00:09:17.534 "nvme_admin": false, 00:09:17.534 "nvme_io": false, 00:09:17.534 "nvme_io_md": false, 00:09:17.534 "write_zeroes": true, 00:09:17.534 "zcopy": true, 00:09:17.534 "get_zone_info": false, 00:09:17.534 "zone_management": false, 00:09:17.534 "zone_append": false, 00:09:17.534 "compare": false, 00:09:17.534 "compare_and_write": false, 00:09:17.534 "abort": true, 00:09:17.534 "seek_hole": false, 00:09:17.534 "seek_data": false, 00:09:17.534 "copy": true, 00:09:17.534 "nvme_iov_md": false 00:09:17.534 }, 00:09:17.534 "memory_domains": [ 00:09:17.534 { 00:09:17.534 "dma_device_id": "system", 00:09:17.534 "dma_device_type": 1 00:09:17.534 }, 00:09:17.534 { 00:09:17.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.534 "dma_device_type": 2 00:09:17.534 } 00:09:17.534 ], 00:09:17.534 "driver_specific": {} 00:09:17.534 } 00:09:17.534 ] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.534 BaseBdev4 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:17.534 [ 00:09:17.534 { 00:09:17.534 "name": "BaseBdev4", 00:09:17.534 "aliases": [ 00:09:17.534 "d47fa031-e200-456a-9132-131b337fe014" 00:09:17.534 ], 00:09:17.534 "product_name": "Malloc disk", 00:09:17.534 "block_size": 512, 00:09:17.534 "num_blocks": 65536, 00:09:17.534 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:17.534 "assigned_rate_limits": { 00:09:17.534 "rw_ios_per_sec": 0, 00:09:17.534 "rw_mbytes_per_sec": 0, 00:09:17.534 "r_mbytes_per_sec": 0, 00:09:17.534 "w_mbytes_per_sec": 0 00:09:17.534 }, 00:09:17.534 "claimed": false, 00:09:17.534 "zoned": false, 00:09:17.534 "supported_io_types": { 00:09:17.534 "read": true, 00:09:17.534 "write": true, 00:09:17.534 "unmap": true, 00:09:17.534 "flush": true, 00:09:17.534 "reset": true, 00:09:17.534 "nvme_admin": false, 00:09:17.534 "nvme_io": false, 00:09:17.534 "nvme_io_md": false, 00:09:17.534 "write_zeroes": true, 00:09:17.534 "zcopy": true, 00:09:17.534 "get_zone_info": false, 00:09:17.534 "zone_management": false, 00:09:17.534 "zone_append": false, 00:09:17.534 "compare": false, 00:09:17.534 "compare_and_write": false, 00:09:17.534 "abort": true, 00:09:17.534 "seek_hole": false, 00:09:17.534 "seek_data": false, 00:09:17.534 "copy": true, 00:09:17.534 "nvme_iov_md": false 00:09:17.534 }, 00:09:17.534 "memory_domains": [ 00:09:17.534 { 00:09:17.534 "dma_device_id": "system", 00:09:17.534 "dma_device_type": 1 00:09:17.534 }, 00:09:17.534 { 00:09:17.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.534 "dma_device_type": 2 00:09:17.534 } 00:09:17.534 ], 00:09:17.534 "driver_specific": {} 00:09:17.534 } 00:09:17.534 ] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:17.534 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:17.534 01:53:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.535 [2024-12-07 01:53:22.861513] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.535 [2024-12-07 01:53:22.861592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.535 [2024-12-07 01:53:22.861648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.535 [2024-12-07 01:53:22.863383] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.535 [2024-12-07 01:53:22.863468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.535 "name": "Existed_Raid", 00:09:17.535 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:17.535 "strip_size_kb": 64, 00:09:17.535 "state": "configuring", 00:09:17.535 "raid_level": "raid0", 00:09:17.535 "superblock": true, 00:09:17.535 "num_base_bdevs": 4, 00:09:17.535 "num_base_bdevs_discovered": 3, 00:09:17.535 "num_base_bdevs_operational": 4, 00:09:17.535 "base_bdevs_list": [ 00:09:17.535 { 00:09:17.535 "name": "BaseBdev1", 00:09:17.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.535 "is_configured": false, 00:09:17.535 "data_offset": 0, 00:09:17.535 "data_size": 0 00:09:17.535 }, 00:09:17.535 { 00:09:17.535 "name": "BaseBdev2", 00:09:17.535 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:17.535 "is_configured": true, 00:09:17.535 "data_offset": 2048, 00:09:17.535 "data_size": 63488 
00:09:17.535 }, 00:09:17.535 { 00:09:17.535 "name": "BaseBdev3", 00:09:17.535 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:17.535 "is_configured": true, 00:09:17.535 "data_offset": 2048, 00:09:17.535 "data_size": 63488 00:09:17.535 }, 00:09:17.535 { 00:09:17.535 "name": "BaseBdev4", 00:09:17.535 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:17.535 "is_configured": true, 00:09:17.535 "data_offset": 2048, 00:09:17.535 "data_size": 63488 00:09:17.535 } 00:09:17.535 ] 00:09:17.535 }' 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.535 01:53:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.105 [2024-12-07 01:53:23.316712] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.105 "name": "Existed_Raid", 00:09:18.105 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:18.105 "strip_size_kb": 64, 00:09:18.105 "state": "configuring", 00:09:18.105 "raid_level": "raid0", 00:09:18.105 "superblock": true, 00:09:18.105 "num_base_bdevs": 4, 00:09:18.105 "num_base_bdevs_discovered": 2, 00:09:18.105 "num_base_bdevs_operational": 4, 00:09:18.105 "base_bdevs_list": [ 00:09:18.105 { 00:09:18.105 "name": "BaseBdev1", 00:09:18.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.105 "is_configured": false, 00:09:18.105 "data_offset": 0, 00:09:18.105 "data_size": 0 00:09:18.105 }, 00:09:18.105 { 00:09:18.105 "name": null, 00:09:18.105 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:18.105 "is_configured": false, 00:09:18.105 "data_offset": 0, 00:09:18.105 "data_size": 63488 
00:09:18.105 }, 00:09:18.105 { 00:09:18.105 "name": "BaseBdev3", 00:09:18.105 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:18.105 "is_configured": true, 00:09:18.105 "data_offset": 2048, 00:09:18.105 "data_size": 63488 00:09:18.105 }, 00:09:18.105 { 00:09:18.105 "name": "BaseBdev4", 00:09:18.105 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:18.105 "is_configured": true, 00:09:18.105 "data_offset": 2048, 00:09:18.105 "data_size": 63488 00:09:18.105 } 00:09:18.105 ] 00:09:18.105 }' 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.105 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 [2024-12-07 01:53:23.810656] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.365 BaseBdev1 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.365 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.624 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.624 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.624 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.624 [ 00:09:18.624 { 00:09:18.624 "name": "BaseBdev1", 00:09:18.624 "aliases": [ 00:09:18.625 "5f9b2d57-fcae-4f43-95fa-b393cbf4998c" 00:09:18.625 ], 00:09:18.625 "product_name": "Malloc disk", 00:09:18.625 "block_size": 512, 00:09:18.625 "num_blocks": 65536, 00:09:18.625 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:18.625 "assigned_rate_limits": { 00:09:18.625 "rw_ios_per_sec": 0, 00:09:18.625 "rw_mbytes_per_sec": 0, 
00:09:18.625 "r_mbytes_per_sec": 0, 00:09:18.625 "w_mbytes_per_sec": 0 00:09:18.625 }, 00:09:18.625 "claimed": true, 00:09:18.625 "claim_type": "exclusive_write", 00:09:18.625 "zoned": false, 00:09:18.625 "supported_io_types": { 00:09:18.625 "read": true, 00:09:18.625 "write": true, 00:09:18.625 "unmap": true, 00:09:18.625 "flush": true, 00:09:18.625 "reset": true, 00:09:18.625 "nvme_admin": false, 00:09:18.625 "nvme_io": false, 00:09:18.625 "nvme_io_md": false, 00:09:18.625 "write_zeroes": true, 00:09:18.625 "zcopy": true, 00:09:18.625 "get_zone_info": false, 00:09:18.625 "zone_management": false, 00:09:18.625 "zone_append": false, 00:09:18.625 "compare": false, 00:09:18.625 "compare_and_write": false, 00:09:18.625 "abort": true, 00:09:18.625 "seek_hole": false, 00:09:18.625 "seek_data": false, 00:09:18.625 "copy": true, 00:09:18.625 "nvme_iov_md": false 00:09:18.625 }, 00:09:18.625 "memory_domains": [ 00:09:18.625 { 00:09:18.625 "dma_device_id": "system", 00:09:18.625 "dma_device_type": 1 00:09:18.625 }, 00:09:18.625 { 00:09:18.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.625 "dma_device_type": 2 00:09:18.625 } 00:09:18.625 ], 00:09:18.625 "driver_specific": {} 00:09:18.625 } 00:09:18.625 ] 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.625 01:53:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.625 "name": "Existed_Raid", 00:09:18.625 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:18.625 "strip_size_kb": 64, 00:09:18.625 "state": "configuring", 00:09:18.625 "raid_level": "raid0", 00:09:18.625 "superblock": true, 00:09:18.625 "num_base_bdevs": 4, 00:09:18.625 "num_base_bdevs_discovered": 3, 00:09:18.625 "num_base_bdevs_operational": 4, 00:09:18.625 "base_bdevs_list": [ 00:09:18.625 { 00:09:18.625 "name": "BaseBdev1", 00:09:18.625 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:18.625 "is_configured": true, 00:09:18.625 "data_offset": 2048, 00:09:18.625 "data_size": 63488 00:09:18.625 }, 00:09:18.625 { 
00:09:18.625 "name": null, 00:09:18.625 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:18.625 "is_configured": false, 00:09:18.625 "data_offset": 0, 00:09:18.625 "data_size": 63488 00:09:18.625 }, 00:09:18.625 { 00:09:18.625 "name": "BaseBdev3", 00:09:18.625 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:18.625 "is_configured": true, 00:09:18.625 "data_offset": 2048, 00:09:18.625 "data_size": 63488 00:09:18.625 }, 00:09:18.625 { 00:09:18.625 "name": "BaseBdev4", 00:09:18.625 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:18.625 "is_configured": true, 00:09:18.625 "data_offset": 2048, 00:09:18.625 "data_size": 63488 00:09:18.625 } 00:09:18.625 ] 00:09:18.625 }' 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.625 01:53:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.884 [2024-12-07 01:53:24.329800] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.884 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.143 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.143 01:53:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.143 "name": "Existed_Raid", 00:09:19.143 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:19.143 "strip_size_kb": 64, 00:09:19.143 "state": "configuring", 00:09:19.143 "raid_level": "raid0", 00:09:19.143 "superblock": true, 00:09:19.143 "num_base_bdevs": 4, 00:09:19.143 "num_base_bdevs_discovered": 2, 00:09:19.143 "num_base_bdevs_operational": 4, 00:09:19.143 "base_bdevs_list": [ 00:09:19.143 { 00:09:19.143 "name": "BaseBdev1", 00:09:19.143 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:19.143 "is_configured": true, 00:09:19.143 "data_offset": 2048, 00:09:19.144 "data_size": 63488 00:09:19.144 }, 00:09:19.144 { 00:09:19.144 "name": null, 00:09:19.144 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:19.144 "is_configured": false, 00:09:19.144 "data_offset": 0, 00:09:19.144 "data_size": 63488 00:09:19.144 }, 00:09:19.144 { 00:09:19.144 "name": null, 00:09:19.144 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:19.144 "is_configured": false, 00:09:19.144 "data_offset": 0, 00:09:19.144 "data_size": 63488 00:09:19.144 }, 00:09:19.144 { 00:09:19.144 "name": "BaseBdev4", 00:09:19.144 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:19.144 "is_configured": true, 00:09:19.144 "data_offset": 2048, 00:09:19.144 "data_size": 63488 00:09:19.144 } 00:09:19.144 ] 00:09:19.144 }' 00:09:19.144 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.144 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.403 
01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.403 [2024-12-07 01:53:24.809037] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:19.403 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.404 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.663 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.663 "name": "Existed_Raid", 00:09:19.663 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:19.663 "strip_size_kb": 64, 00:09:19.663 "state": "configuring", 00:09:19.663 "raid_level": "raid0", 00:09:19.663 "superblock": true, 00:09:19.663 "num_base_bdevs": 4, 00:09:19.663 "num_base_bdevs_discovered": 3, 00:09:19.663 "num_base_bdevs_operational": 4, 00:09:19.663 "base_bdevs_list": [ 00:09:19.663 { 00:09:19.663 "name": "BaseBdev1", 00:09:19.663 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:19.663 "is_configured": true, 00:09:19.663 "data_offset": 2048, 00:09:19.663 "data_size": 63488 00:09:19.663 }, 00:09:19.663 { 00:09:19.663 "name": null, 00:09:19.663 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:19.663 "is_configured": false, 00:09:19.663 "data_offset": 0, 00:09:19.663 "data_size": 63488 00:09:19.663 }, 00:09:19.663 { 00:09:19.663 "name": "BaseBdev3", 00:09:19.663 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:19.663 "is_configured": true, 00:09:19.663 "data_offset": 2048, 00:09:19.663 "data_size": 63488 00:09:19.663 }, 00:09:19.663 { 00:09:19.663 "name": "BaseBdev4", 00:09:19.663 "uuid": 
"d47fa031-e200-456a-9132-131b337fe014", 00:09:19.663 "is_configured": true, 00:09:19.663 "data_offset": 2048, 00:09:19.663 "data_size": 63488 00:09:19.663 } 00:09:19.663 ] 00:09:19.663 }' 00:09:19.663 01:53:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.663 01:53:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.924 [2024-12-07 01:53:25.284247] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.924 "name": "Existed_Raid", 00:09:19.924 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:19.924 "strip_size_kb": 64, 00:09:19.924 "state": "configuring", 00:09:19.924 "raid_level": "raid0", 00:09:19.924 "superblock": true, 00:09:19.924 "num_base_bdevs": 4, 00:09:19.924 "num_base_bdevs_discovered": 2, 00:09:19.924 "num_base_bdevs_operational": 4, 00:09:19.924 "base_bdevs_list": [ 00:09:19.924 { 00:09:19.924 "name": null, 00:09:19.924 
"uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:19.924 "is_configured": false, 00:09:19.924 "data_offset": 0, 00:09:19.924 "data_size": 63488 00:09:19.924 }, 00:09:19.924 { 00:09:19.924 "name": null, 00:09:19.924 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:19.924 "is_configured": false, 00:09:19.924 "data_offset": 0, 00:09:19.924 "data_size": 63488 00:09:19.924 }, 00:09:19.924 { 00:09:19.924 "name": "BaseBdev3", 00:09:19.924 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:19.924 "is_configured": true, 00:09:19.924 "data_offset": 2048, 00:09:19.924 "data_size": 63488 00:09:19.924 }, 00:09:19.924 { 00:09:19.924 "name": "BaseBdev4", 00:09:19.924 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:19.924 "is_configured": true, 00:09:19.924 "data_offset": 2048, 00:09:19.924 "data_size": 63488 00:09:19.924 } 00:09:19.924 ] 00:09:19.924 }' 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.924 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.493 [2024-12-07 01:53:25.789681] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.493 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.493 "name": "Existed_Raid", 00:09:20.493 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:20.493 "strip_size_kb": 64, 00:09:20.493 "state": "configuring", 00:09:20.493 "raid_level": "raid0", 00:09:20.493 "superblock": true, 00:09:20.493 "num_base_bdevs": 4, 00:09:20.493 "num_base_bdevs_discovered": 3, 00:09:20.493 "num_base_bdevs_operational": 4, 00:09:20.493 "base_bdevs_list": [ 00:09:20.493 { 00:09:20.493 "name": null, 00:09:20.493 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:20.493 "is_configured": false, 00:09:20.493 "data_offset": 0, 00:09:20.493 "data_size": 63488 00:09:20.493 }, 00:09:20.493 { 00:09:20.493 "name": "BaseBdev2", 00:09:20.493 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:20.493 "is_configured": true, 00:09:20.493 "data_offset": 2048, 00:09:20.493 "data_size": 63488 00:09:20.493 }, 00:09:20.493 { 00:09:20.493 "name": "BaseBdev3", 00:09:20.493 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:20.493 "is_configured": true, 00:09:20.494 "data_offset": 2048, 00:09:20.494 "data_size": 63488 00:09:20.494 }, 00:09:20.494 { 00:09:20.494 "name": "BaseBdev4", 00:09:20.494 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:20.494 "is_configured": true, 00:09:20.494 "data_offset": 2048, 00:09:20.494 "data_size": 63488 00:09:20.494 } 00:09:20.494 ] 00:09:20.494 }' 00:09:20.494 01:53:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.494 01:53:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.062 01:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5f9b2d57-fcae-4f43-95fa-b393cbf4998c 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 [2024-12-07 01:53:26.323490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:21.062 [2024-12-07 01:53:26.323746] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:21.062 [2024-12-07 01:53:26.323794] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:21.062 [2024-12-07 01:53:26.324064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:21.062 NewBaseBdev 00:09:21.062 [2024-12-07 01:53:26.324211] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:21.062 [2024-12-07 01:53:26.324250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:21.062 [2024-12-07 01:53:26.324375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 [ 00:09:21.062 { 00:09:21.062 "name": "NewBaseBdev", 00:09:21.062 "aliases": [ 00:09:21.062 "5f9b2d57-fcae-4f43-95fa-b393cbf4998c" 00:09:21.062 ], 00:09:21.062 "product_name": "Malloc disk", 00:09:21.062 "block_size": 512, 00:09:21.062 "num_blocks": 65536, 00:09:21.062 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:21.062 "assigned_rate_limits": { 00:09:21.062 "rw_ios_per_sec": 0, 00:09:21.062 "rw_mbytes_per_sec": 0, 00:09:21.062 "r_mbytes_per_sec": 0, 00:09:21.062 "w_mbytes_per_sec": 0 00:09:21.062 }, 00:09:21.062 "claimed": true, 00:09:21.062 "claim_type": "exclusive_write", 00:09:21.062 "zoned": false, 00:09:21.062 "supported_io_types": { 00:09:21.062 "read": true, 00:09:21.062 "write": true, 00:09:21.062 "unmap": true, 00:09:21.062 "flush": true, 00:09:21.062 "reset": true, 00:09:21.062 "nvme_admin": false, 00:09:21.062 "nvme_io": false, 00:09:21.062 "nvme_io_md": false, 00:09:21.062 "write_zeroes": true, 00:09:21.062 "zcopy": true, 00:09:21.062 "get_zone_info": false, 00:09:21.062 "zone_management": false, 00:09:21.062 "zone_append": false, 00:09:21.062 "compare": false, 00:09:21.062 "compare_and_write": false, 00:09:21.062 "abort": true, 00:09:21.062 "seek_hole": false, 00:09:21.062 "seek_data": false, 00:09:21.062 "copy": true, 00:09:21.062 "nvme_iov_md": false 00:09:21.062 }, 00:09:21.062 "memory_domains": [ 00:09:21.062 { 00:09:21.062 "dma_device_id": "system", 00:09:21.062 "dma_device_type": 1 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.062 "dma_device_type": 2 00:09:21.062 } 00:09:21.062 ], 00:09:21.062 "driver_specific": {} 00:09:21.062 } 00:09:21.062 ] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.062 01:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.062 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.062 "name": "Existed_Raid", 00:09:21.062 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:21.062 "strip_size_kb": 64, 00:09:21.062 
"state": "online", 00:09:21.062 "raid_level": "raid0", 00:09:21.062 "superblock": true, 00:09:21.062 "num_base_bdevs": 4, 00:09:21.062 "num_base_bdevs_discovered": 4, 00:09:21.062 "num_base_bdevs_operational": 4, 00:09:21.062 "base_bdevs_list": [ 00:09:21.062 { 00:09:21.062 "name": "NewBaseBdev", 00:09:21.062 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": "BaseBdev2", 00:09:21.062 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": "BaseBdev3", 00:09:21.062 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.062 "data_size": 63488 00:09:21.062 }, 00:09:21.062 { 00:09:21.062 "name": "BaseBdev4", 00:09:21.062 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:21.062 "is_configured": true, 00:09:21.062 "data_offset": 2048, 00:09:21.063 "data_size": 63488 00:09:21.063 } 00:09:21.063 ] 00:09:21.063 }' 00:09:21.063 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.063 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.323 
01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.323 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.584 [2024-12-07 01:53:26.787065] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.584 "name": "Existed_Raid", 00:09:21.584 "aliases": [ 00:09:21.584 "3792d10e-a571-4785-bb85-a0eced1194a0" 00:09:21.584 ], 00:09:21.584 "product_name": "Raid Volume", 00:09:21.584 "block_size": 512, 00:09:21.584 "num_blocks": 253952, 00:09:21.584 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:21.584 "assigned_rate_limits": { 00:09:21.584 "rw_ios_per_sec": 0, 00:09:21.584 "rw_mbytes_per_sec": 0, 00:09:21.584 "r_mbytes_per_sec": 0, 00:09:21.584 "w_mbytes_per_sec": 0 00:09:21.584 }, 00:09:21.584 "claimed": false, 00:09:21.584 "zoned": false, 00:09:21.584 "supported_io_types": { 00:09:21.584 "read": true, 00:09:21.584 "write": true, 00:09:21.584 "unmap": true, 00:09:21.584 "flush": true, 00:09:21.584 "reset": true, 00:09:21.584 "nvme_admin": false, 00:09:21.584 "nvme_io": false, 00:09:21.584 "nvme_io_md": false, 00:09:21.584 "write_zeroes": true, 00:09:21.584 "zcopy": false, 00:09:21.584 "get_zone_info": false, 00:09:21.584 "zone_management": false, 00:09:21.584 "zone_append": false, 00:09:21.584 "compare": false, 00:09:21.584 "compare_and_write": false, 00:09:21.584 "abort": 
false, 00:09:21.584 "seek_hole": false, 00:09:21.584 "seek_data": false, 00:09:21.584 "copy": false, 00:09:21.584 "nvme_iov_md": false 00:09:21.584 }, 00:09:21.584 "memory_domains": [ 00:09:21.584 { 00:09:21.584 "dma_device_id": "system", 00:09:21.584 "dma_device_type": 1 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.584 "dma_device_type": 2 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "system", 00:09:21.584 "dma_device_type": 1 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.584 "dma_device_type": 2 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "system", 00:09:21.584 "dma_device_type": 1 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.584 "dma_device_type": 2 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "system", 00:09:21.584 "dma_device_type": 1 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.584 "dma_device_type": 2 00:09:21.584 } 00:09:21.584 ], 00:09:21.584 "driver_specific": { 00:09:21.584 "raid": { 00:09:21.584 "uuid": "3792d10e-a571-4785-bb85-a0eced1194a0", 00:09:21.584 "strip_size_kb": 64, 00:09:21.584 "state": "online", 00:09:21.584 "raid_level": "raid0", 00:09:21.584 "superblock": true, 00:09:21.584 "num_base_bdevs": 4, 00:09:21.584 "num_base_bdevs_discovered": 4, 00:09:21.584 "num_base_bdevs_operational": 4, 00:09:21.584 "base_bdevs_list": [ 00:09:21.584 { 00:09:21.584 "name": "NewBaseBdev", 00:09:21.584 "uuid": "5f9b2d57-fcae-4f43-95fa-b393cbf4998c", 00:09:21.584 "is_configured": true, 00:09:21.584 "data_offset": 2048, 00:09:21.584 "data_size": 63488 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "name": "BaseBdev2", 00:09:21.584 "uuid": "ce2fa110-2276-43c4-bcce-60584a666fea", 00:09:21.584 "is_configured": true, 00:09:21.584 "data_offset": 2048, 00:09:21.584 "data_size": 63488 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 
"name": "BaseBdev3", 00:09:21.584 "uuid": "87a34420-1fc0-49e0-b570-653d0f71ad81", 00:09:21.584 "is_configured": true, 00:09:21.584 "data_offset": 2048, 00:09:21.584 "data_size": 63488 00:09:21.584 }, 00:09:21.584 { 00:09:21.584 "name": "BaseBdev4", 00:09:21.584 "uuid": "d47fa031-e200-456a-9132-131b337fe014", 00:09:21.584 "is_configured": true, 00:09:21.584 "data_offset": 2048, 00:09:21.584 "data_size": 63488 00:09:21.584 } 00:09:21.584 ] 00:09:21.584 } 00:09:21.584 } 00:09:21.584 }' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:21.584 BaseBdev2 00:09:21.584 BaseBdev3 00:09:21.584 BaseBdev4' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.584 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.585 01:53:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.585 01:53:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.585 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.846 [2024-12-07 01:53:27.114181] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:21.846 [2024-12-07 01:53:27.114245] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.846 [2024-12-07 01:53:27.114357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.846 [2024-12-07 01:53:27.114434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.846 [2024-12-07 01:53:27.114475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80732 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80732 ']' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80732 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80732 00:09:21.846 killing process with pid 80732 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80732' 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80732 00:09:21.846 [2024-12-07 01:53:27.163202] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:21.846 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80732 00:09:21.846 [2024-12-07 01:53:27.204128] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.106 01:53:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.106 00:09:22.106 real 0m9.501s 00:09:22.106 user 0m16.313s 00:09:22.106 sys 0m1.901s 00:09:22.106 ************************************ 00:09:22.106 END TEST raid_state_function_test_sb 00:09:22.106 
************************************ 00:09:22.106 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.106 01:53:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.106 01:53:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:22.106 01:53:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:22.106 01:53:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.106 01:53:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.106 ************************************ 00:09:22.106 START TEST raid_superblock_test 00:09:22.106 ************************************ 00:09:22.106 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:09:22.106 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:22.106 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81381 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81381 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81381 ']' 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.107 01:53:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.366 [2024-12-07 01:53:27.602608] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:22.366 [2024-12-07 01:53:27.602756] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81381 ] 00:09:22.366 [2024-12-07 01:53:27.748536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.366 [2024-12-07 01:53:27.792382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.626 [2024-12-07 01:53:27.833586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.626 [2024-12-07 01:53:27.833622] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:23.195 
01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 malloc1 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 [2024-12-07 01:53:28.443574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:23.195 [2024-12-07 01:53:28.443714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.195 [2024-12-07 01:53:28.443756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:23.195 [2024-12-07 01:53:28.443791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.195 [2024-12-07 01:53:28.445860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.195 [2024-12-07 01:53:28.445945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:23.195 pt1 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 malloc2 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 [2024-12-07 01:53:28.490088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:23.195 [2024-12-07 01:53:28.490336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.195 [2024-12-07 01:53:28.490441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:23.195 [2024-12-07 01:53:28.490542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.195 [2024-12-07 01:53:28.495363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.195 [2024-12-07 01:53:28.495493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:23.195 
pt2 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 malloc3 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.195 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.195 [2024-12-07 01:53:28.521519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:23.195 [2024-12-07 01:53:28.521612] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.195 [2024-12-07 01:53:28.521647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:23.195 [2024-12-07 01:53:28.521711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.195 [2024-12-07 01:53:28.523768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.195 [2024-12-07 01:53:28.523838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:23.195 pt3 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 malloc4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 [2024-12-07 01:53:28.553894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:23.196 [2024-12-07 01:53:28.554000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:23.196 [2024-12-07 01:53:28.554035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:23.196 [2024-12-07 01:53:28.554066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:23.196 [2024-12-07 01:53:28.556189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:23.196 [2024-12-07 01:53:28.556261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:23.196 pt4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 [2024-12-07 01:53:28.565925] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:23.196 [2024-12-07 
01:53:28.567836] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:23.196 [2024-12-07 01:53:28.567943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:23.196 [2024-12-07 01:53:28.568008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:23.196 [2024-12-07 01:53:28.568164] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:23.196 [2024-12-07 01:53:28.568185] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:23.196 [2024-12-07 01:53:28.568433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:23.196 [2024-12-07 01:53:28.568569] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:23.196 [2024-12-07 01:53:28.568588] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:23.196 [2024-12-07 01:53:28.568727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.196 "name": "raid_bdev1", 00:09:23.196 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:23.196 "strip_size_kb": 64, 00:09:23.196 "state": "online", 00:09:23.196 "raid_level": "raid0", 00:09:23.196 "superblock": true, 00:09:23.196 "num_base_bdevs": 4, 00:09:23.196 "num_base_bdevs_discovered": 4, 00:09:23.196 "num_base_bdevs_operational": 4, 00:09:23.196 "base_bdevs_list": [ 00:09:23.196 { 00:09:23.196 "name": "pt1", 00:09:23.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.196 "is_configured": true, 00:09:23.196 "data_offset": 2048, 00:09:23.196 "data_size": 63488 00:09:23.196 }, 00:09:23.196 { 00:09:23.196 "name": "pt2", 00:09:23.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.196 "is_configured": true, 00:09:23.196 "data_offset": 2048, 00:09:23.196 "data_size": 63488 00:09:23.196 }, 00:09:23.196 { 00:09:23.196 "name": "pt3", 00:09:23.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.196 "is_configured": true, 00:09:23.196 "data_offset": 2048, 00:09:23.196 
"data_size": 63488 00:09:23.196 }, 00:09:23.196 { 00:09:23.196 "name": "pt4", 00:09:23.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:23.196 "is_configured": true, 00:09:23.196 "data_offset": 2048, 00:09:23.196 "data_size": 63488 00:09:23.196 } 00:09:23.196 ] 00:09:23.196 }' 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.196 01:53:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.766 [2024-12-07 01:53:29.073373] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.766 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:23.766 "name": "raid_bdev1", 00:09:23.766 "aliases": [ 00:09:23.766 "709708cc-19cd-4707-82a1-0e1b24722a0c" 
00:09:23.766 ], 00:09:23.766 "product_name": "Raid Volume", 00:09:23.766 "block_size": 512, 00:09:23.766 "num_blocks": 253952, 00:09:23.766 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:23.766 "assigned_rate_limits": { 00:09:23.766 "rw_ios_per_sec": 0, 00:09:23.766 "rw_mbytes_per_sec": 0, 00:09:23.766 "r_mbytes_per_sec": 0, 00:09:23.766 "w_mbytes_per_sec": 0 00:09:23.766 }, 00:09:23.766 "claimed": false, 00:09:23.766 "zoned": false, 00:09:23.766 "supported_io_types": { 00:09:23.766 "read": true, 00:09:23.766 "write": true, 00:09:23.766 "unmap": true, 00:09:23.766 "flush": true, 00:09:23.766 "reset": true, 00:09:23.766 "nvme_admin": false, 00:09:23.766 "nvme_io": false, 00:09:23.766 "nvme_io_md": false, 00:09:23.766 "write_zeroes": true, 00:09:23.766 "zcopy": false, 00:09:23.766 "get_zone_info": false, 00:09:23.766 "zone_management": false, 00:09:23.766 "zone_append": false, 00:09:23.766 "compare": false, 00:09:23.766 "compare_and_write": false, 00:09:23.766 "abort": false, 00:09:23.766 "seek_hole": false, 00:09:23.766 "seek_data": false, 00:09:23.766 "copy": false, 00:09:23.766 "nvme_iov_md": false 00:09:23.766 }, 00:09:23.766 "memory_domains": [ 00:09:23.766 { 00:09:23.766 "dma_device_id": "system", 00:09:23.766 "dma_device_type": 1 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.766 "dma_device_type": 2 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "system", 00:09:23.766 "dma_device_type": 1 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.766 "dma_device_type": 2 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "system", 00:09:23.766 "dma_device_type": 1 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.766 "dma_device_type": 2 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": "system", 00:09:23.766 "dma_device_type": 1 00:09:23.766 }, 00:09:23.766 { 00:09:23.766 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:23.767 "dma_device_type": 2 00:09:23.767 } 00:09:23.767 ], 00:09:23.767 "driver_specific": { 00:09:23.767 "raid": { 00:09:23.767 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:23.767 "strip_size_kb": 64, 00:09:23.767 "state": "online", 00:09:23.767 "raid_level": "raid0", 00:09:23.767 "superblock": true, 00:09:23.767 "num_base_bdevs": 4, 00:09:23.767 "num_base_bdevs_discovered": 4, 00:09:23.767 "num_base_bdevs_operational": 4, 00:09:23.767 "base_bdevs_list": [ 00:09:23.767 { 00:09:23.767 "name": "pt1", 00:09:23.767 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:23.767 "is_configured": true, 00:09:23.767 "data_offset": 2048, 00:09:23.767 "data_size": 63488 00:09:23.767 }, 00:09:23.767 { 00:09:23.767 "name": "pt2", 00:09:23.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:23.767 "is_configured": true, 00:09:23.767 "data_offset": 2048, 00:09:23.767 "data_size": 63488 00:09:23.767 }, 00:09:23.767 { 00:09:23.767 "name": "pt3", 00:09:23.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:23.767 "is_configured": true, 00:09:23.767 "data_offset": 2048, 00:09:23.767 "data_size": 63488 00:09:23.767 }, 00:09:23.767 { 00:09:23.767 "name": "pt4", 00:09:23.767 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:23.767 "is_configured": true, 00:09:23.767 "data_offset": 2048, 00:09:23.767 "data_size": 63488 00:09:23.767 } 00:09:23.767 ] 00:09:23.767 } 00:09:23.767 } 00:09:23.767 }' 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:23.767 pt2 00:09:23.767 pt3 00:09:23.767 pt4' 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.767 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.026 01:53:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 [2024-12-07 01:53:29.392728] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=709708cc-19cd-4707-82a1-0e1b24722a0c 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 709708cc-19cd-4707-82a1-0e1b24722a0c ']' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 [2024-12-07 01:53:29.436368] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.026 [2024-12-07 01:53:29.436435] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.026 [2024-12-07 01:53:29.436517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.026 [2024-12-07 01:53:29.436598] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:24.026 [2024-12-07 01:53:29.436649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.026 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.332 01:53:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 [2024-12-07 01:53:29.600141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:24.332 [2024-12-07 01:53:29.602288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:24.332 [2024-12-07 01:53:29.602387] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:24.332 [2024-12-07 01:53:29.602451] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:24.332 [2024-12-07 01:53:29.602527] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:24.332 [2024-12-07 01:53:29.602617] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:24.332 [2024-12-07 01:53:29.602691] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:24.332 [2024-12-07 01:53:29.602769] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:24.332 [2024-12-07 01:53:29.602828] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:24.332 [2024-12-07 01:53:29.602860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:24.332 request: 00:09:24.332 { 00:09:24.332 "name": "raid_bdev1", 00:09:24.332 "raid_level": "raid0", 00:09:24.332 "base_bdevs": [ 00:09:24.332 "malloc1", 00:09:24.332 "malloc2", 00:09:24.332 "malloc3", 00:09:24.332 "malloc4" 00:09:24.332 ], 00:09:24.332 "strip_size_kb": 64, 00:09:24.332 "superblock": false, 00:09:24.332 "method": "bdev_raid_create", 00:09:24.332 "req_id": 1 00:09:24.332 } 00:09:24.332 Got JSON-RPC error response 00:09:24.332 response: 00:09:24.332 { 00:09:24.332 "code": -17, 00:09:24.332 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:24.332 } 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.332 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.333 [2024-12-07 01:53:29.663961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:24.333 [2024-12-07 01:53:29.664040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.333 [2024-12-07 01:53:29.664078] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:24.333 [2024-12-07 01:53:29.664104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.333 [2024-12-07 01:53:29.666245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.333 [2024-12-07 01:53:29.666306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:24.333 [2024-12-07 01:53:29.666410] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:24.333 [2024-12-07 01:53:29.666486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:24.333 pt1 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.333 "name": "raid_bdev1", 00:09:24.333 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:24.333 "strip_size_kb": 64, 00:09:24.333 "state": "configuring", 00:09:24.333 "raid_level": "raid0", 00:09:24.333 "superblock": true, 00:09:24.333 "num_base_bdevs": 4, 00:09:24.333 "num_base_bdevs_discovered": 1, 00:09:24.333 "num_base_bdevs_operational": 4, 00:09:24.333 "base_bdevs_list": [ 00:09:24.333 { 00:09:24.333 "name": "pt1", 00:09:24.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.333 "is_configured": true, 00:09:24.333 "data_offset": 2048, 00:09:24.333 "data_size": 63488 00:09:24.333 }, 00:09:24.333 { 00:09:24.333 "name": null, 00:09:24.333 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.333 "is_configured": false, 00:09:24.333 "data_offset": 2048, 00:09:24.333 "data_size": 63488 00:09:24.333 }, 00:09:24.333 { 00:09:24.333 "name": null, 00:09:24.333 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.333 "is_configured": false, 00:09:24.333 "data_offset": 2048, 00:09:24.333 "data_size": 63488 00:09:24.333 }, 00:09:24.333 { 00:09:24.333 "name": null, 00:09:24.333 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.333 "is_configured": false, 00:09:24.333 "data_offset": 2048, 00:09:24.333 "data_size": 63488 00:09:24.333 } 00:09:24.333 ] 00:09:24.333 }' 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.333 01:53:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.902 [2024-12-07 01:53:30.083269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:24.902 [2024-12-07 01:53:30.083368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.902 [2024-12-07 01:53:30.083405] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:24.902 [2024-12-07 01:53:30.083475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.902 [2024-12-07 01:53:30.083902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.902 [2024-12-07 01:53:30.083957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:24.902 [2024-12-07 01:53:30.084060] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:24.902 [2024-12-07 01:53:30.084119] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:24.902 pt2 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.902 [2024-12-07 01:53:30.091266] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.902 01:53:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.902 "name": "raid_bdev1", 00:09:24.902 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:24.902 "strip_size_kb": 64, 00:09:24.902 "state": "configuring", 00:09:24.902 "raid_level": "raid0", 00:09:24.902 "superblock": true, 00:09:24.902 "num_base_bdevs": 4, 00:09:24.902 "num_base_bdevs_discovered": 1, 00:09:24.902 "num_base_bdevs_operational": 4, 00:09:24.902 "base_bdevs_list": [ 00:09:24.902 { 00:09:24.902 "name": "pt1", 00:09:24.902 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:24.902 "is_configured": true, 00:09:24.902 "data_offset": 2048, 00:09:24.902 "data_size": 63488 00:09:24.902 }, 00:09:24.902 { 00:09:24.902 "name": null, 00:09:24.902 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:24.902 "is_configured": false, 00:09:24.902 "data_offset": 0, 00:09:24.902 "data_size": 63488 00:09:24.902 }, 00:09:24.902 { 00:09:24.902 "name": null, 00:09:24.902 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:24.902 "is_configured": false, 00:09:24.902 "data_offset": 2048, 00:09:24.902 "data_size": 63488 00:09:24.902 }, 00:09:24.902 { 00:09:24.902 "name": null, 00:09:24.902 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:24.902 "is_configured": false, 00:09:24.902 "data_offset": 2048, 00:09:24.902 "data_size": 63488 00:09:24.902 } 00:09:24.902 ] 00:09:24.902 }' 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.902 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.163 [2024-12-07 01:53:30.578481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:25.163 [2024-12-07 01:53:30.578595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.163 [2024-12-07 01:53:30.578627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:25.163 [2024-12-07 01:53:30.578657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.163 [2024-12-07 01:53:30.579060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.163 [2024-12-07 01:53:30.579123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:25.163 [2024-12-07 01:53:30.579222] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:25.163 [2024-12-07 01:53:30.579272] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:25.163 pt2 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.163 [2024-12-07 01:53:30.590419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:25.163 [2024-12-07 01:53:30.590469] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.163 [2024-12-07 01:53:30.590485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:25.163 [2024-12-07 01:53:30.590503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.163 [2024-12-07 01:53:30.590835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.163 [2024-12-07 01:53:30.590860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:25.163 [2024-12-07 01:53:30.590917] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:25.163 [2024-12-07 01:53:30.590944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:25.163 pt3 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.163 [2024-12-07 01:53:30.602419] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:25.163 [2024-12-07 01:53:30.602501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:25.163 [2024-12-07 01:53:30.602529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:25.163 [2024-12-07 01:53:30.602556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:25.163 [2024-12-07 01:53:30.602898] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:25.163 [2024-12-07 01:53:30.602954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:25.163 [2024-12-07 01:53:30.603030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:25.163 [2024-12-07 01:53:30.603084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:25.163 [2024-12-07 01:53:30.603205] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:25.163 [2024-12-07 01:53:30.603244] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:25.163 [2024-12-07 01:53:30.603492] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:25.163 [2024-12-07 01:53:30.603640] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:25.163 [2024-12-07 01:53:30.603691] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:25.163 [2024-12-07 01:53:30.603835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.163 pt4 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:25.163 
01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.163 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.423 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.423 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.423 "name": "raid_bdev1", 00:09:25.423 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:25.423 "strip_size_kb": 64, 00:09:25.423 "state": "online", 00:09:25.423 "raid_level": "raid0", 00:09:25.423 "superblock": true, 00:09:25.423 
"num_base_bdevs": 4, 00:09:25.423 "num_base_bdevs_discovered": 4, 00:09:25.423 "num_base_bdevs_operational": 4, 00:09:25.423 "base_bdevs_list": [ 00:09:25.423 { 00:09:25.423 "name": "pt1", 00:09:25.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.423 "is_configured": true, 00:09:25.423 "data_offset": 2048, 00:09:25.423 "data_size": 63488 00:09:25.423 }, 00:09:25.423 { 00:09:25.423 "name": "pt2", 00:09:25.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.423 "is_configured": true, 00:09:25.423 "data_offset": 2048, 00:09:25.423 "data_size": 63488 00:09:25.423 }, 00:09:25.423 { 00:09:25.423 "name": "pt3", 00:09:25.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.423 "is_configured": true, 00:09:25.423 "data_offset": 2048, 00:09:25.423 "data_size": 63488 00:09:25.423 }, 00:09:25.423 { 00:09:25.423 "name": "pt4", 00:09:25.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:25.423 "is_configured": true, 00:09:25.423 "data_offset": 2048, 00:09:25.423 "data_size": 63488 00:09:25.423 } 00:09:25.423 ] 00:09:25.423 }' 00:09:25.423 01:53:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.423 01:53:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.682 [2024-12-07 01:53:31.069990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.682 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.682 "name": "raid_bdev1", 00:09:25.682 "aliases": [ 00:09:25.682 "709708cc-19cd-4707-82a1-0e1b24722a0c" 00:09:25.682 ], 00:09:25.682 "product_name": "Raid Volume", 00:09:25.682 "block_size": 512, 00:09:25.682 "num_blocks": 253952, 00:09:25.682 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:25.682 "assigned_rate_limits": { 00:09:25.682 "rw_ios_per_sec": 0, 00:09:25.682 "rw_mbytes_per_sec": 0, 00:09:25.682 "r_mbytes_per_sec": 0, 00:09:25.682 "w_mbytes_per_sec": 0 00:09:25.682 }, 00:09:25.682 "claimed": false, 00:09:25.682 "zoned": false, 00:09:25.682 "supported_io_types": { 00:09:25.682 "read": true, 00:09:25.682 "write": true, 00:09:25.682 "unmap": true, 00:09:25.682 "flush": true, 00:09:25.682 "reset": true, 00:09:25.682 "nvme_admin": false, 00:09:25.682 "nvme_io": false, 00:09:25.682 "nvme_io_md": false, 00:09:25.682 "write_zeroes": true, 00:09:25.682 "zcopy": false, 00:09:25.682 "get_zone_info": false, 00:09:25.683 "zone_management": false, 00:09:25.683 "zone_append": false, 00:09:25.683 "compare": false, 00:09:25.683 "compare_and_write": false, 00:09:25.683 "abort": false, 00:09:25.683 "seek_hole": false, 00:09:25.683 "seek_data": false, 00:09:25.683 "copy": false, 00:09:25.683 "nvme_iov_md": false 00:09:25.683 }, 00:09:25.683 "memory_domains": [ 00:09:25.683 { 00:09:25.683 "dma_device_id": "system", 
00:09:25.683 "dma_device_type": 1 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.683 "dma_device_type": 2 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "system", 00:09:25.683 "dma_device_type": 1 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.683 "dma_device_type": 2 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "system", 00:09:25.683 "dma_device_type": 1 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.683 "dma_device_type": 2 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "system", 00:09:25.683 "dma_device_type": 1 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.683 "dma_device_type": 2 00:09:25.683 } 00:09:25.683 ], 00:09:25.683 "driver_specific": { 00:09:25.683 "raid": { 00:09:25.683 "uuid": "709708cc-19cd-4707-82a1-0e1b24722a0c", 00:09:25.683 "strip_size_kb": 64, 00:09:25.683 "state": "online", 00:09:25.683 "raid_level": "raid0", 00:09:25.683 "superblock": true, 00:09:25.683 "num_base_bdevs": 4, 00:09:25.683 "num_base_bdevs_discovered": 4, 00:09:25.683 "num_base_bdevs_operational": 4, 00:09:25.683 "base_bdevs_list": [ 00:09:25.683 { 00:09:25.683 "name": "pt1", 00:09:25.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "name": "pt2", 00:09:25.683 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "name": "pt3", 00:09:25.683 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 }, 00:09:25.683 { 00:09:25.683 "name": "pt4", 00:09:25.683 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:25.683 "is_configured": true, 00:09:25.683 "data_offset": 2048, 00:09:25.683 "data_size": 63488 00:09:25.683 } 00:09:25.683 ] 00:09:25.683 } 00:09:25.683 } 00:09:25.683 }' 00:09:25.683 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:25.942 pt2 00:09:25.942 pt3 00:09:25.942 pt4' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.942 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.202 [2024-12-07 01:53:31.425324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 709708cc-19cd-4707-82a1-0e1b24722a0c '!=' 709708cc-19cd-4707-82a1-0e1b24722a0c ']' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81381 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81381 ']' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81381 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:26.202 01:53:31 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81381 00:09:26.202 killing process with pid 81381 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81381' 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 81381 00:09:26.202 [2024-12-07 01:53:31.504095] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.202 [2024-12-07 01:53:31.504180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.202 [2024-12-07 01:53:31.504244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.202 [2024-12-07 01:53:31.504255] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:26.202 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 81381 00:09:26.202 [2024-12-07 01:53:31.548097] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.462 01:53:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:26.462 00:09:26.462 real 0m4.270s 00:09:26.462 user 0m6.775s 00:09:26.462 sys 0m0.893s 00:09:26.462 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:26.462 ************************************ 00:09:26.462 END TEST raid_superblock_test 00:09:26.462 ************************************ 00:09:26.462 01:53:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.462 
01:53:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:26.462 01:53:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:26.462 01:53:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:26.462 01:53:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.462 ************************************ 00:09:26.462 START TEST raid_read_error_test 00:09:26.462 ************************************ 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BWfY4vMVgm 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81634 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.462 01:53:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81634 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 81634 ']' 00:09:26.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.462 01:53:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.721 [2024-12-07 01:53:31.958255] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:26.721 [2024-12-07 01:53:31.958380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81634 ] 00:09:26.722 [2024-12-07 01:53:32.087021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.722 [2024-12-07 01:53:32.130216] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.722 [2024-12-07 01:53:32.171896] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.722 [2024-12-07 01:53:32.171927] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 BaseBdev1_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 true 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 [2024-12-07 01:53:32.821473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.660 [2024-12-07 01:53:32.821595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.660 [2024-12-07 01:53:32.821622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:27.660 [2024-12-07 01:53:32.821630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.660 [2024-12-07 01:53:32.823733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.660 [2024-12-07 01:53:32.823832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.660 BaseBdev1 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 BaseBdev2_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 true 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 [2024-12-07 01:53:32.871237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.660 [2024-12-07 01:53:32.871330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.660 [2024-12-07 01:53:32.871355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:27.660 [2024-12-07 01:53:32.871363] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.660 [2024-12-07 01:53:32.873395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.660 [2024-12-07 01:53:32.873428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.660 BaseBdev2 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 BaseBdev3_malloc 00:09:27.660 01:53:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 true 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 [2024-12-07 01:53:32.911624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:27.660 [2024-12-07 01:53:32.911719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.660 [2024-12-07 01:53:32.911759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:27.660 [2024-12-07 01:53:32.911769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.660 [2024-12-07 01:53:32.913819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.660 [2024-12-07 01:53:32.913852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:27.660 BaseBdev3 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 BaseBdev4_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.660 true 00:09:27.660 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.661 [2024-12-07 01:53:32.952001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:27.661 [2024-12-07 01:53:32.952085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.661 [2024-12-07 01:53:32.952140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:27.661 [2024-12-07 01:53:32.952168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.661 [2024-12-07 01:53:32.954272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.661 [2024-12-07 01:53:32.954352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:27.661 BaseBdev4 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.661 [2024-12-07 01:53:32.964031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.661 [2024-12-07 01:53:32.965831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.661 [2024-12-07 01:53:32.965954] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.661 [2024-12-07 01:53:32.966052] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:27.661 [2024-12-07 01:53:32.966271] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:27.661 [2024-12-07 01:53:32.966338] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:27.661 [2024-12-07 01:53:32.966588] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:27.661 [2024-12-07 01:53:32.966769] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:27.661 [2024-12-07 01:53:32.966816] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:27.661 [2024-12-07 01:53:32.966987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:27.661 01:53:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.661 01:53:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.661 01:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.661 "name": "raid_bdev1", 00:09:27.661 "uuid": "ae3b137d-c9f3-4394-93a3-761886596067", 00:09:27.661 "strip_size_kb": 64, 00:09:27.661 "state": "online", 00:09:27.661 "raid_level": "raid0", 00:09:27.661 "superblock": true, 00:09:27.661 "num_base_bdevs": 4, 00:09:27.661 "num_base_bdevs_discovered": 4, 00:09:27.661 "num_base_bdevs_operational": 4, 00:09:27.661 "base_bdevs_list": [ 00:09:27.661 
{ 00:09:27.661 "name": "BaseBdev1", 00:09:27.661 "uuid": "c15c6d40-73bb-5ad6-80bd-338e703b3f95", 00:09:27.661 "is_configured": true, 00:09:27.661 "data_offset": 2048, 00:09:27.661 "data_size": 63488 00:09:27.661 }, 00:09:27.661 { 00:09:27.661 "name": "BaseBdev2", 00:09:27.661 "uuid": "9343e0d3-1252-5037-bc57-72b7ab3250aa", 00:09:27.661 "is_configured": true, 00:09:27.661 "data_offset": 2048, 00:09:27.661 "data_size": 63488 00:09:27.661 }, 00:09:27.661 { 00:09:27.661 "name": "BaseBdev3", 00:09:27.661 "uuid": "9a198ec5-1229-565d-9263-49da025c17de", 00:09:27.661 "is_configured": true, 00:09:27.661 "data_offset": 2048, 00:09:27.661 "data_size": 63488 00:09:27.661 }, 00:09:27.661 { 00:09:27.661 "name": "BaseBdev4", 00:09:27.661 "uuid": "d386227f-d902-55d7-8f21-85369393e306", 00:09:27.661 "is_configured": true, 00:09:27.661 "data_offset": 2048, 00:09:27.661 "data_size": 63488 00:09:27.661 } 00:09:27.661 ] 00:09:27.661 }' 00:09:27.661 01:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.661 01:53:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.919 01:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:27.919 01:53:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.179 [2024-12-07 01:53:33.459594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.118 01:53:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.118 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.119 01:53:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.119 "name": "raid_bdev1", 00:09:29.119 "uuid": "ae3b137d-c9f3-4394-93a3-761886596067", 00:09:29.119 "strip_size_kb": 64, 00:09:29.119 "state": "online", 00:09:29.119 "raid_level": "raid0", 00:09:29.119 "superblock": true, 00:09:29.119 "num_base_bdevs": 4, 00:09:29.119 "num_base_bdevs_discovered": 4, 00:09:29.119 "num_base_bdevs_operational": 4, 00:09:29.119 "base_bdevs_list": [ 00:09:29.119 { 00:09:29.119 "name": "BaseBdev1", 00:09:29.119 "uuid": "c15c6d40-73bb-5ad6-80bd-338e703b3f95", 00:09:29.119 "is_configured": true, 00:09:29.119 "data_offset": 2048, 00:09:29.119 "data_size": 63488 00:09:29.119 }, 00:09:29.119 { 00:09:29.119 "name": "BaseBdev2", 00:09:29.119 "uuid": "9343e0d3-1252-5037-bc57-72b7ab3250aa", 00:09:29.119 "is_configured": true, 00:09:29.119 "data_offset": 2048, 00:09:29.119 "data_size": 63488 00:09:29.119 }, 00:09:29.119 { 00:09:29.119 "name": "BaseBdev3", 00:09:29.119 "uuid": "9a198ec5-1229-565d-9263-49da025c17de", 00:09:29.119 "is_configured": true, 00:09:29.119 "data_offset": 2048, 00:09:29.119 "data_size": 63488 00:09:29.119 }, 00:09:29.119 { 00:09:29.119 "name": "BaseBdev4", 00:09:29.119 "uuid": "d386227f-d902-55d7-8f21-85369393e306", 00:09:29.119 "is_configured": true, 00:09:29.119 "data_offset": 2048, 00:09:29.119 "data_size": 63488 00:09:29.119 } 00:09:29.119 ] 00:09:29.119 }' 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.119 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.378 [2024-12-07 01:53:34.827199] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.378 [2024-12-07 01:53:34.827276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.378 [2024-12-07 01:53:34.829827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.378 [2024-12-07 01:53:34.829948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.378 [2024-12-07 01:53:34.830014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.378 [2024-12-07 01:53:34.830057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:29.378 { 00:09:29.378 "results": [ 00:09:29.378 { 00:09:29.378 "job": "raid_bdev1", 00:09:29.378 "core_mask": "0x1", 00:09:29.378 "workload": "randrw", 00:09:29.378 "percentage": 50, 00:09:29.378 "status": "finished", 00:09:29.378 "queue_depth": 1, 00:09:29.378 "io_size": 131072, 00:09:29.378 "runtime": 1.368408, 00:09:29.378 "iops": 16942.315449778136, 00:09:29.378 "mibps": 2117.789431222267, 00:09:29.378 "io_failed": 1, 00:09:29.378 "io_timeout": 0, 00:09:29.378 "avg_latency_us": 81.82050727346866, 00:09:29.378 "min_latency_us": 24.929257641921396, 00:09:29.378 "max_latency_us": 1345.0620087336245 00:09:29.378 } 00:09:29.378 ], 00:09:29.378 "core_count": 1 00:09:29.378 } 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81634 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 81634 ']' 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 81634 00:09:29.378 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81634 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.638 killing process with pid 81634 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81634' 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 81634 00:09:29.638 [2024-12-07 01:53:34.873846] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.638 01:53:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 81634 00:09:29.638 [2024-12-07 01:53:34.909620] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BWfY4vMVgm 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:29.897 00:09:29.897 real 0m3.293s 00:09:29.897 user 0m4.128s 00:09:29.897 sys 0m0.519s 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:29.897 01:53:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.897 ************************************ 00:09:29.897 END TEST raid_read_error_test 00:09:29.897 ************************************ 00:09:29.897 01:53:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:29.897 01:53:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:29.897 01:53:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.897 01:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.897 ************************************ 00:09:29.897 START TEST raid_write_error_test 00:09:29.897 ************************************ 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xol1ZhIeoM 00:09:29.897 01:53:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81763 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81763 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 81763 ']' 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.897 01:53:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.897 [2024-12-07 01:53:35.319615] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:29.897 [2024-12-07 01:53:35.319733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81763 ] 00:09:30.156 [2024-12-07 01:53:35.463966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.156 [2024-12-07 01:53:35.507657] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.156 [2024-12-07 01:53:35.548734] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.156 [2024-12-07 01:53:35.548768] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.724 BaseBdev1_malloc 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.724 true 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.724 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-12-07 01:53:36.186054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:30.984 [2024-12-07 01:53:36.186177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.984 [2024-12-07 01:53:36.186226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:30.984 [2024-12-07 01:53:36.186256] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.984 [2024-12-07 01:53:36.188532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.984 [2024-12-07 01:53:36.188602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:30.984 BaseBdev1 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 BaseBdev2_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:30.984 01:53:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 true 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-12-07 01:53:36.236359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:30.984 [2024-12-07 01:53:36.236473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.984 [2024-12-07 01:53:36.236511] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:30.984 [2024-12-07 01:53:36.236539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.984 [2024-12-07 01:53:36.238568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.984 [2024-12-07 01:53:36.238634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:30.984 BaseBdev2 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:30.984 BaseBdev3_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 true 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-12-07 01:53:36.277106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:30.984 [2024-12-07 01:53:36.277209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.984 [2024-12-07 01:53:36.277246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:30.984 [2024-12-07 01:53:36.277272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.984 [2024-12-07 01:53:36.279298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.984 [2024-12-07 01:53:36.279365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:30.984 BaseBdev3 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 BaseBdev4_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 true 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-12-07 01:53:36.317511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:30.984 [2024-12-07 01:53:36.317594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.984 [2024-12-07 01:53:36.317647] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:30.984 [2024-12-07 01:53:36.317695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.984 [2024-12-07 01:53:36.319716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.984 [2024-12-07 01:53:36.319780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:30.984 BaseBdev4 
00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.984 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.984 [2024-12-07 01:53:36.329553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.984 [2024-12-07 01:53:36.331378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.985 [2024-12-07 01:53:36.331490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:30.985 [2024-12-07 01:53:36.331591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:30.985 [2024-12-07 01:53:36.331829] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:30.985 [2024-12-07 01:53:36.331878] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:30.985 [2024-12-07 01:53:36.332168] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:30.985 [2024-12-07 01:53:36.332350] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:30.985 [2024-12-07 01:53:36.332371] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:30.985 [2024-12-07 01:53:36.332498] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.985 "name": "raid_bdev1", 00:09:30.985 "uuid": "220347d7-fa63-49f1-bb36-20de120630ea", 00:09:30.985 "strip_size_kb": 64, 00:09:30.985 "state": "online", 00:09:30.985 "raid_level": "raid0", 00:09:30.985 "superblock": true, 00:09:30.985 "num_base_bdevs": 4, 00:09:30.985 "num_base_bdevs_discovered": 4, 00:09:30.985 
"num_base_bdevs_operational": 4, 00:09:30.985 "base_bdevs_list": [ 00:09:30.985 { 00:09:30.985 "name": "BaseBdev1", 00:09:30.985 "uuid": "8e20b781-76e1-554f-9c3d-ee2f34ead767", 00:09:30.985 "is_configured": true, 00:09:30.985 "data_offset": 2048, 00:09:30.985 "data_size": 63488 00:09:30.985 }, 00:09:30.985 { 00:09:30.985 "name": "BaseBdev2", 00:09:30.985 "uuid": "0515d073-af2e-58b6-8eb5-4621e76d23cb", 00:09:30.985 "is_configured": true, 00:09:30.985 "data_offset": 2048, 00:09:30.985 "data_size": 63488 00:09:30.985 }, 00:09:30.985 { 00:09:30.985 "name": "BaseBdev3", 00:09:30.985 "uuid": "8227d2f6-0f33-5dfd-856a-b84fe5d7e58b", 00:09:30.985 "is_configured": true, 00:09:30.985 "data_offset": 2048, 00:09:30.985 "data_size": 63488 00:09:30.985 }, 00:09:30.985 { 00:09:30.985 "name": "BaseBdev4", 00:09:30.985 "uuid": "70d2084f-6b61-5ead-9919-6d08e511d48a", 00:09:30.985 "is_configured": true, 00:09:30.985 "data_offset": 2048, 00:09:30.985 "data_size": 63488 00:09:30.985 } 00:09:30.985 ] 00:09:30.985 }' 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.985 01:53:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.554 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:31.554 01:53:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:31.554 [2024-12-07 01:53:36.869011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.501 "name": "raid_bdev1", 00:09:32.501 "uuid": "220347d7-fa63-49f1-bb36-20de120630ea", 00:09:32.501 "strip_size_kb": 64, 00:09:32.501 "state": "online", 00:09:32.501 "raid_level": "raid0", 00:09:32.501 "superblock": true, 00:09:32.501 "num_base_bdevs": 4, 00:09:32.501 "num_base_bdevs_discovered": 4, 00:09:32.501 "num_base_bdevs_operational": 4, 00:09:32.501 "base_bdevs_list": [ 00:09:32.501 { 00:09:32.501 "name": "BaseBdev1", 00:09:32.501 "uuid": "8e20b781-76e1-554f-9c3d-ee2f34ead767", 00:09:32.501 "is_configured": true, 00:09:32.501 "data_offset": 2048, 00:09:32.501 "data_size": 63488 00:09:32.501 }, 00:09:32.501 { 00:09:32.501 "name": "BaseBdev2", 00:09:32.501 "uuid": "0515d073-af2e-58b6-8eb5-4621e76d23cb", 00:09:32.501 "is_configured": true, 00:09:32.501 "data_offset": 2048, 00:09:32.501 "data_size": 63488 00:09:32.501 }, 00:09:32.501 { 00:09:32.501 "name": "BaseBdev3", 00:09:32.501 "uuid": "8227d2f6-0f33-5dfd-856a-b84fe5d7e58b", 00:09:32.501 "is_configured": true, 00:09:32.501 "data_offset": 2048, 00:09:32.501 "data_size": 63488 00:09:32.501 }, 00:09:32.501 { 00:09:32.501 "name": "BaseBdev4", 00:09:32.501 "uuid": "70d2084f-6b61-5ead-9919-6d08e511d48a", 00:09:32.501 "is_configured": true, 00:09:32.501 "data_offset": 2048, 00:09:32.501 "data_size": 63488 00:09:32.501 } 00:09:32.501 ] 00:09:32.501 }' 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.501 01:53:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:32.820 [2024-12-07 01:53:38.244829] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:32.820 [2024-12-07 01:53:38.244903] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.820 [2024-12-07 01:53:38.247436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.820 [2024-12-07 01:53:38.247532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.820 [2024-12-07 01:53:38.247599] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.820 [2024-12-07 01:53:38.247695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:32.820 { 00:09:32.820 "results": [ 00:09:32.820 { 00:09:32.820 "job": "raid_bdev1", 00:09:32.820 "core_mask": "0x1", 00:09:32.820 "workload": "randrw", 00:09:32.820 "percentage": 50, 00:09:32.820 "status": "finished", 00:09:32.820 "queue_depth": 1, 00:09:32.820 "io_size": 131072, 00:09:32.820 "runtime": 1.37667, 00:09:32.820 "iops": 16550.08099253997, 00:09:32.820 "mibps": 2068.7601240674962, 00:09:32.820 "io_failed": 1, 00:09:32.820 "io_timeout": 0, 00:09:32.820 "avg_latency_us": 83.8072043489885, 00:09:32.820 "min_latency_us": 25.9353711790393, 00:09:32.820 "max_latency_us": 1488.1537117903931 00:09:32.820 } 00:09:32.820 ], 00:09:32.820 "core_count": 1 00:09:32.820 } 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81763 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 81763 ']' 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 81763 00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 
00:09:32.820 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81763 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81763' 00:09:33.103 killing process with pid 81763 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 81763 00:09:33.103 [2024-12-07 01:53:38.284898] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 81763 00:09:33.103 [2024-12-07 01:53:38.321278] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xol1ZhIeoM 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.103 01:53:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:33.103 00:09:33.103 real 0m3.340s 00:09:33.103 user 0m4.226s 00:09:33.103 sys 0m0.515s 00:09:33.363 
************************************ 00:09:33.364 END TEST raid_write_error_test 00:09:33.364 ************************************ 00:09:33.364 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.364 01:53:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 01:53:38 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:33.364 01:53:38 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:33.364 01:53:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.364 01:53:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.364 01:53:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 ************************************ 00:09:33.364 START TEST raid_state_function_test 00:09:33.364 ************************************ 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.364 01:53:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:33.364 01:53:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81894 00:09:33.364 Process raid pid: 81894 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81894' 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81894 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 81894 ']' 00:09:33.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.364 01:53:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.364 [2024-12-07 01:53:38.725319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:33.364 [2024-12-07 01:53:38.725518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.624 [2024-12-07 01:53:38.870461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.624 [2024-12-07 01:53:38.914901] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.624 [2024-12-07 01:53:38.956195] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.624 [2024-12-07 01:53:38.956325] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.195 [2024-12-07 01:53:39.549077] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.195 [2024-12-07 01:53:39.549280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.195 [2024-12-07 01:53:39.549320] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.195 [2024-12-07 01:53:39.549348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.195 [2024-12-07 01:53:39.549408] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:34.195 [2024-12-07 01:53:39.549432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.195 [2024-12-07 01:53:39.549496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:34.195 [2024-12-07 01:53:39.549520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.195 "name": "Existed_Raid", 00:09:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.195 "strip_size_kb": 64, 00:09:34.195 "state": "configuring", 00:09:34.195 "raid_level": "concat", 00:09:34.195 "superblock": false, 00:09:34.195 "num_base_bdevs": 4, 00:09:34.195 "num_base_bdevs_discovered": 0, 00:09:34.195 "num_base_bdevs_operational": 4, 00:09:34.195 "base_bdevs_list": [ 00:09:34.195 { 00:09:34.195 "name": "BaseBdev1", 00:09:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.195 "is_configured": false, 00:09:34.195 "data_offset": 0, 00:09:34.195 "data_size": 0 00:09:34.195 }, 00:09:34.195 { 00:09:34.195 "name": "BaseBdev2", 00:09:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.195 "is_configured": false, 00:09:34.195 "data_offset": 0, 00:09:34.195 "data_size": 0 00:09:34.195 }, 00:09:34.195 { 00:09:34.195 "name": "BaseBdev3", 00:09:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.195 "is_configured": false, 00:09:34.195 "data_offset": 0, 00:09:34.195 "data_size": 0 00:09:34.195 }, 00:09:34.195 { 00:09:34.195 "name": "BaseBdev4", 00:09:34.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.195 "is_configured": false, 00:09:34.195 "data_offset": 0, 00:09:34.195 "data_size": 0 00:09:34.195 } 00:09:34.195 ] 00:09:34.195 }' 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.195 01:53:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 [2024-12-07 01:53:40.056118] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.766 [2024-12-07 01:53:40.056201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 [2024-12-07 01:53:40.068094] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.766 [2024-12-07 01:53:40.068168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.766 [2024-12-07 01:53:40.068193] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.766 [2024-12-07 01:53:40.068215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.766 [2024-12-07 01:53:40.068233] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.766 [2024-12-07 01:53:40.068253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.766 [2024-12-07 01:53:40.068270] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:34.766 [2024-12-07 01:53:40.068290] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 [2024-12-07 01:53:40.088661] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.766 BaseBdev1 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.766 [ 00:09:34.766 { 00:09:34.766 "name": "BaseBdev1", 00:09:34.766 "aliases": [ 00:09:34.766 "b01bf697-623f-4ddc-96d3-754e10c1c6cf" 00:09:34.766 ], 00:09:34.766 "product_name": "Malloc disk", 00:09:34.766 "block_size": 512, 00:09:34.766 "num_blocks": 65536, 00:09:34.766 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:34.766 "assigned_rate_limits": { 00:09:34.766 "rw_ios_per_sec": 0, 00:09:34.766 "rw_mbytes_per_sec": 0, 00:09:34.766 "r_mbytes_per_sec": 0, 00:09:34.766 "w_mbytes_per_sec": 0 00:09:34.766 }, 00:09:34.766 "claimed": true, 00:09:34.766 "claim_type": "exclusive_write", 00:09:34.766 "zoned": false, 00:09:34.766 "supported_io_types": { 00:09:34.766 "read": true, 00:09:34.766 "write": true, 00:09:34.766 "unmap": true, 00:09:34.766 "flush": true, 00:09:34.766 "reset": true, 00:09:34.766 "nvme_admin": false, 00:09:34.766 "nvme_io": false, 00:09:34.766 "nvme_io_md": false, 00:09:34.766 "write_zeroes": true, 00:09:34.766 "zcopy": true, 00:09:34.766 "get_zone_info": false, 00:09:34.766 "zone_management": false, 00:09:34.766 "zone_append": false, 00:09:34.766 "compare": false, 00:09:34.766 "compare_and_write": false, 00:09:34.766 "abort": true, 00:09:34.766 "seek_hole": false, 00:09:34.766 "seek_data": false, 00:09:34.766 "copy": true, 00:09:34.766 "nvme_iov_md": false 00:09:34.766 }, 00:09:34.766 "memory_domains": [ 00:09:34.766 { 00:09:34.766 "dma_device_id": "system", 00:09:34.766 "dma_device_type": 1 00:09:34.766 }, 00:09:34.766 { 00:09:34.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.766 "dma_device_type": 2 00:09:34.766 } 00:09:34.766 ], 00:09:34.766 "driver_specific": {} 00:09:34.766 } 00:09:34.766 ] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:34.766 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.767 "name": "Existed_Raid", 
00:09:34.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.767 "strip_size_kb": 64, 00:09:34.767 "state": "configuring", 00:09:34.767 "raid_level": "concat", 00:09:34.767 "superblock": false, 00:09:34.767 "num_base_bdevs": 4, 00:09:34.767 "num_base_bdevs_discovered": 1, 00:09:34.767 "num_base_bdevs_operational": 4, 00:09:34.767 "base_bdevs_list": [ 00:09:34.767 { 00:09:34.767 "name": "BaseBdev1", 00:09:34.767 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:34.767 "is_configured": true, 00:09:34.767 "data_offset": 0, 00:09:34.767 "data_size": 65536 00:09:34.767 }, 00:09:34.767 { 00:09:34.767 "name": "BaseBdev2", 00:09:34.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.767 "is_configured": false, 00:09:34.767 "data_offset": 0, 00:09:34.767 "data_size": 0 00:09:34.767 }, 00:09:34.767 { 00:09:34.767 "name": "BaseBdev3", 00:09:34.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.767 "is_configured": false, 00:09:34.767 "data_offset": 0, 00:09:34.767 "data_size": 0 00:09:34.767 }, 00:09:34.767 { 00:09:34.767 "name": "BaseBdev4", 00:09:34.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.767 "is_configured": false, 00:09:34.767 "data_offset": 0, 00:09:34.767 "data_size": 0 00:09:34.767 } 00:09:34.767 ] 00:09:34.767 }' 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.767 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.337 [2024-12-07 01:53:40.559900] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.337 [2024-12-07 01:53:40.559994] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.337 [2024-12-07 01:53:40.571931] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.337 [2024-12-07 01:53:40.573746] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.337 [2024-12-07 01:53:40.573816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.337 [2024-12-07 01:53:40.573843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:35.337 [2024-12-07 01:53:40.573864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:35.337 [2024-12-07 01:53:40.573882] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:35.337 [2024-12-07 01:53:40.573901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.337 "name": "Existed_Raid", 00:09:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.337 "strip_size_kb": 64, 00:09:35.337 "state": "configuring", 00:09:35.337 "raid_level": "concat", 00:09:35.337 "superblock": false, 00:09:35.337 "num_base_bdevs": 4, 00:09:35.337 
"num_base_bdevs_discovered": 1, 00:09:35.337 "num_base_bdevs_operational": 4, 00:09:35.337 "base_bdevs_list": [ 00:09:35.337 { 00:09:35.337 "name": "BaseBdev1", 00:09:35.337 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:35.337 "is_configured": true, 00:09:35.337 "data_offset": 0, 00:09:35.337 "data_size": 65536 00:09:35.337 }, 00:09:35.337 { 00:09:35.337 "name": "BaseBdev2", 00:09:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.337 "is_configured": false, 00:09:35.337 "data_offset": 0, 00:09:35.337 "data_size": 0 00:09:35.337 }, 00:09:35.337 { 00:09:35.337 "name": "BaseBdev3", 00:09:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.337 "is_configured": false, 00:09:35.337 "data_offset": 0, 00:09:35.337 "data_size": 0 00:09:35.337 }, 00:09:35.337 { 00:09:35.337 "name": "BaseBdev4", 00:09:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.337 "is_configured": false, 00:09:35.337 "data_offset": 0, 00:09:35.337 "data_size": 0 00:09:35.337 } 00:09:35.337 ] 00:09:35.337 }' 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.337 01:53:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.598 [2024-12-07 01:53:41.044027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.598 BaseBdev2 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:35.598 01:53:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.598 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 [ 00:09:35.857 { 00:09:35.857 "name": "BaseBdev2", 00:09:35.857 "aliases": [ 00:09:35.857 "899f85ce-6f6d-4b4f-bbba-2de91d22be5f" 00:09:35.857 ], 00:09:35.857 "product_name": "Malloc disk", 00:09:35.857 "block_size": 512, 00:09:35.857 "num_blocks": 65536, 00:09:35.857 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:35.857 "assigned_rate_limits": { 00:09:35.857 "rw_ios_per_sec": 0, 00:09:35.857 "rw_mbytes_per_sec": 0, 00:09:35.857 "r_mbytes_per_sec": 0, 00:09:35.857 "w_mbytes_per_sec": 0 00:09:35.857 }, 00:09:35.857 "claimed": true, 00:09:35.857 "claim_type": "exclusive_write", 00:09:35.857 "zoned": false, 00:09:35.857 "supported_io_types": { 
00:09:35.857 "read": true, 00:09:35.857 "write": true, 00:09:35.857 "unmap": true, 00:09:35.857 "flush": true, 00:09:35.857 "reset": true, 00:09:35.857 "nvme_admin": false, 00:09:35.857 "nvme_io": false, 00:09:35.857 "nvme_io_md": false, 00:09:35.857 "write_zeroes": true, 00:09:35.857 "zcopy": true, 00:09:35.857 "get_zone_info": false, 00:09:35.857 "zone_management": false, 00:09:35.857 "zone_append": false, 00:09:35.857 "compare": false, 00:09:35.857 "compare_and_write": false, 00:09:35.857 "abort": true, 00:09:35.857 "seek_hole": false, 00:09:35.857 "seek_data": false, 00:09:35.857 "copy": true, 00:09:35.857 "nvme_iov_md": false 00:09:35.857 }, 00:09:35.857 "memory_domains": [ 00:09:35.857 { 00:09:35.857 "dma_device_id": "system", 00:09:35.857 "dma_device_type": 1 00:09:35.857 }, 00:09:35.857 { 00:09:35.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.857 "dma_device_type": 2 00:09:35.857 } 00:09:35.857 ], 00:09:35.857 "driver_specific": {} 00:09:35.857 } 00:09:35.857 ] 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.857 "name": "Existed_Raid", 00:09:35.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.857 "strip_size_kb": 64, 00:09:35.857 "state": "configuring", 00:09:35.857 "raid_level": "concat", 00:09:35.857 "superblock": false, 00:09:35.857 "num_base_bdevs": 4, 00:09:35.857 "num_base_bdevs_discovered": 2, 00:09:35.857 "num_base_bdevs_operational": 4, 00:09:35.857 "base_bdevs_list": [ 00:09:35.857 { 00:09:35.857 "name": "BaseBdev1", 00:09:35.857 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:35.857 "is_configured": true, 00:09:35.857 "data_offset": 0, 00:09:35.857 "data_size": 65536 00:09:35.857 }, 00:09:35.857 { 00:09:35.857 "name": "BaseBdev2", 00:09:35.857 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:35.857 
"is_configured": true, 00:09:35.857 "data_offset": 0, 00:09:35.857 "data_size": 65536 00:09:35.857 }, 00:09:35.857 { 00:09:35.857 "name": "BaseBdev3", 00:09:35.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.857 "is_configured": false, 00:09:35.857 "data_offset": 0, 00:09:35.857 "data_size": 0 00:09:35.857 }, 00:09:35.857 { 00:09:35.857 "name": "BaseBdev4", 00:09:35.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.857 "is_configured": false, 00:09:35.857 "data_offset": 0, 00:09:35.857 "data_size": 0 00:09:35.857 } 00:09:35.857 ] 00:09:35.857 }' 00:09:35.857 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.858 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 [2024-12-07 01:53:41.526197] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.118 BaseBdev3 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 [ 00:09:36.118 { 00:09:36.118 "name": "BaseBdev3", 00:09:36.118 "aliases": [ 00:09:36.118 "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8" 00:09:36.118 ], 00:09:36.118 "product_name": "Malloc disk", 00:09:36.118 "block_size": 512, 00:09:36.118 "num_blocks": 65536, 00:09:36.118 "uuid": "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8", 00:09:36.118 "assigned_rate_limits": { 00:09:36.118 "rw_ios_per_sec": 0, 00:09:36.118 "rw_mbytes_per_sec": 0, 00:09:36.118 "r_mbytes_per_sec": 0, 00:09:36.118 "w_mbytes_per_sec": 0 00:09:36.118 }, 00:09:36.118 "claimed": true, 00:09:36.118 "claim_type": "exclusive_write", 00:09:36.118 "zoned": false, 00:09:36.118 "supported_io_types": { 00:09:36.118 "read": true, 00:09:36.118 "write": true, 00:09:36.118 "unmap": true, 00:09:36.118 "flush": true, 00:09:36.118 "reset": true, 00:09:36.118 "nvme_admin": false, 00:09:36.118 "nvme_io": false, 00:09:36.118 "nvme_io_md": false, 00:09:36.118 "write_zeroes": true, 00:09:36.118 "zcopy": true, 00:09:36.118 "get_zone_info": false, 00:09:36.118 "zone_management": false, 00:09:36.118 "zone_append": false, 00:09:36.118 "compare": false, 00:09:36.118 "compare_and_write": false, 
00:09:36.118 "abort": true, 00:09:36.118 "seek_hole": false, 00:09:36.118 "seek_data": false, 00:09:36.118 "copy": true, 00:09:36.118 "nvme_iov_md": false 00:09:36.118 }, 00:09:36.118 "memory_domains": [ 00:09:36.118 { 00:09:36.118 "dma_device_id": "system", 00:09:36.118 "dma_device_type": 1 00:09:36.118 }, 00:09:36.118 { 00:09:36.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.118 "dma_device_type": 2 00:09:36.118 } 00:09:36.118 ], 00:09:36.118 "driver_specific": {} 00:09:36.118 } 00:09:36.118 ] 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.118 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.378 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.378 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.378 "name": "Existed_Raid", 00:09:36.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.378 "strip_size_kb": 64, 00:09:36.378 "state": "configuring", 00:09:36.378 "raid_level": "concat", 00:09:36.378 "superblock": false, 00:09:36.378 "num_base_bdevs": 4, 00:09:36.378 "num_base_bdevs_discovered": 3, 00:09:36.378 "num_base_bdevs_operational": 4, 00:09:36.378 "base_bdevs_list": [ 00:09:36.378 { 00:09:36.378 "name": "BaseBdev1", 00:09:36.378 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:36.378 "is_configured": true, 00:09:36.378 "data_offset": 0, 00:09:36.378 "data_size": 65536 00:09:36.378 }, 00:09:36.378 { 00:09:36.378 "name": "BaseBdev2", 00:09:36.378 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:36.378 "is_configured": true, 00:09:36.378 "data_offset": 0, 00:09:36.378 "data_size": 65536 00:09:36.378 }, 00:09:36.378 { 00:09:36.378 "name": "BaseBdev3", 00:09:36.378 "uuid": "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8", 00:09:36.378 "is_configured": true, 00:09:36.378 "data_offset": 0, 00:09:36.378 "data_size": 65536 00:09:36.378 }, 00:09:36.378 { 00:09:36.378 "name": "BaseBdev4", 00:09:36.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.378 "is_configured": false, 
00:09:36.378 "data_offset": 0, 00:09:36.378 "data_size": 0 00:09:36.379 } 00:09:36.379 ] 00:09:36.379 }' 00:09:36.379 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.379 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.639 [2024-12-07 01:53:41.968325] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:36.639 [2024-12-07 01:53:41.968377] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:36.639 [2024-12-07 01:53:41.968385] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:36.639 [2024-12-07 01:53:41.968679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:36.639 [2024-12-07 01:53:41.968831] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:36.639 [2024-12-07 01:53:41.968857] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:36.639 [2024-12-07 01:53:41.969035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.639 BaseBdev4 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.639 01:53:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.639 [ 00:09:36.639 { 00:09:36.639 "name": "BaseBdev4", 00:09:36.639 "aliases": [ 00:09:36.639 "45d591e7-330e-442e-a649-1e0d2d4ce5aa" 00:09:36.639 ], 00:09:36.639 "product_name": "Malloc disk", 00:09:36.639 "block_size": 512, 00:09:36.639 "num_blocks": 65536, 00:09:36.639 "uuid": "45d591e7-330e-442e-a649-1e0d2d4ce5aa", 00:09:36.639 "assigned_rate_limits": { 00:09:36.639 "rw_ios_per_sec": 0, 00:09:36.639 "rw_mbytes_per_sec": 0, 00:09:36.639 "r_mbytes_per_sec": 0, 00:09:36.639 "w_mbytes_per_sec": 0 00:09:36.639 }, 00:09:36.639 "claimed": true, 00:09:36.639 "claim_type": "exclusive_write", 00:09:36.639 "zoned": false, 00:09:36.639 "supported_io_types": { 00:09:36.639 "read": true, 00:09:36.639 "write": true, 00:09:36.639 "unmap": true, 00:09:36.639 "flush": true, 00:09:36.639 "reset": true, 00:09:36.639 
"nvme_admin": false, 00:09:36.639 "nvme_io": false, 00:09:36.639 "nvme_io_md": false, 00:09:36.639 "write_zeroes": true, 00:09:36.639 "zcopy": true, 00:09:36.639 "get_zone_info": false, 00:09:36.639 "zone_management": false, 00:09:36.639 "zone_append": false, 00:09:36.639 "compare": false, 00:09:36.639 "compare_and_write": false, 00:09:36.639 "abort": true, 00:09:36.639 "seek_hole": false, 00:09:36.639 "seek_data": false, 00:09:36.639 "copy": true, 00:09:36.639 "nvme_iov_md": false 00:09:36.639 }, 00:09:36.639 "memory_domains": [ 00:09:36.639 { 00:09:36.639 "dma_device_id": "system", 00:09:36.639 "dma_device_type": 1 00:09:36.639 }, 00:09:36.639 { 00:09:36.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.639 "dma_device_type": 2 00:09:36.639 } 00:09:36.639 ], 00:09:36.639 "driver_specific": {} 00:09:36.639 } 00:09:36.639 ] 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:36.639 
01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.639 "name": "Existed_Raid", 00:09:36.639 "uuid": "575dd1f7-a642-488a-8c58-baa1b42acf83", 00:09:36.639 "strip_size_kb": 64, 00:09:36.639 "state": "online", 00:09:36.639 "raid_level": "concat", 00:09:36.639 "superblock": false, 00:09:36.639 "num_base_bdevs": 4, 00:09:36.639 "num_base_bdevs_discovered": 4, 00:09:36.639 "num_base_bdevs_operational": 4, 00:09:36.639 "base_bdevs_list": [ 00:09:36.639 { 00:09:36.639 "name": "BaseBdev1", 00:09:36.639 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:36.639 "is_configured": true, 00:09:36.639 "data_offset": 0, 00:09:36.639 "data_size": 65536 00:09:36.639 }, 00:09:36.639 { 00:09:36.639 "name": "BaseBdev2", 00:09:36.639 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:36.639 "is_configured": true, 00:09:36.639 "data_offset": 0, 00:09:36.639 "data_size": 65536 00:09:36.639 }, 00:09:36.639 { 00:09:36.639 "name": "BaseBdev3", 
00:09:36.639 "uuid": "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8", 00:09:36.639 "is_configured": true, 00:09:36.639 "data_offset": 0, 00:09:36.639 "data_size": 65536 00:09:36.639 }, 00:09:36.639 { 00:09:36.639 "name": "BaseBdev4", 00:09:36.639 "uuid": "45d591e7-330e-442e-a649-1e0d2d4ce5aa", 00:09:36.639 "is_configured": true, 00:09:36.639 "data_offset": 0, 00:09:36.639 "data_size": 65536 00:09:36.639 } 00:09:36.639 ] 00:09:36.639 }' 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.639 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.210 [2024-12-07 01:53:42.451856] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.210 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.210 
01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.210 "name": "Existed_Raid", 00:09:37.210 "aliases": [ 00:09:37.210 "575dd1f7-a642-488a-8c58-baa1b42acf83" 00:09:37.210 ], 00:09:37.210 "product_name": "Raid Volume", 00:09:37.210 "block_size": 512, 00:09:37.210 "num_blocks": 262144, 00:09:37.210 "uuid": "575dd1f7-a642-488a-8c58-baa1b42acf83", 00:09:37.210 "assigned_rate_limits": { 00:09:37.210 "rw_ios_per_sec": 0, 00:09:37.210 "rw_mbytes_per_sec": 0, 00:09:37.210 "r_mbytes_per_sec": 0, 00:09:37.210 "w_mbytes_per_sec": 0 00:09:37.210 }, 00:09:37.210 "claimed": false, 00:09:37.210 "zoned": false, 00:09:37.210 "supported_io_types": { 00:09:37.210 "read": true, 00:09:37.210 "write": true, 00:09:37.210 "unmap": true, 00:09:37.210 "flush": true, 00:09:37.210 "reset": true, 00:09:37.210 "nvme_admin": false, 00:09:37.210 "nvme_io": false, 00:09:37.210 "nvme_io_md": false, 00:09:37.210 "write_zeroes": true, 00:09:37.210 "zcopy": false, 00:09:37.210 "get_zone_info": false, 00:09:37.210 "zone_management": false, 00:09:37.210 "zone_append": false, 00:09:37.210 "compare": false, 00:09:37.210 "compare_and_write": false, 00:09:37.210 "abort": false, 00:09:37.210 "seek_hole": false, 00:09:37.210 "seek_data": false, 00:09:37.210 "copy": false, 00:09:37.210 "nvme_iov_md": false 00:09:37.210 }, 00:09:37.210 "memory_domains": [ 00:09:37.210 { 00:09:37.210 "dma_device_id": "system", 00:09:37.210 "dma_device_type": 1 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.210 "dma_device_type": 2 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "dma_device_id": "system", 00:09:37.210 "dma_device_type": 1 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.210 "dma_device_type": 2 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "dma_device_id": "system", 00:09:37.210 "dma_device_type": 1 00:09:37.210 }, 00:09:37.210 { 00:09:37.210 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:37.210 "dma_device_type": 2 00:09:37.210 }, 00:09:37.210 { 00:09:37.211 "dma_device_id": "system", 00:09:37.211 "dma_device_type": 1 00:09:37.211 }, 00:09:37.211 { 00:09:37.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.211 "dma_device_type": 2 00:09:37.211 } 00:09:37.211 ], 00:09:37.211 "driver_specific": { 00:09:37.211 "raid": { 00:09:37.211 "uuid": "575dd1f7-a642-488a-8c58-baa1b42acf83", 00:09:37.211 "strip_size_kb": 64, 00:09:37.211 "state": "online", 00:09:37.211 "raid_level": "concat", 00:09:37.211 "superblock": false, 00:09:37.211 "num_base_bdevs": 4, 00:09:37.211 "num_base_bdevs_discovered": 4, 00:09:37.211 "num_base_bdevs_operational": 4, 00:09:37.211 "base_bdevs_list": [ 00:09:37.211 { 00:09:37.211 "name": "BaseBdev1", 00:09:37.211 "uuid": "b01bf697-623f-4ddc-96d3-754e10c1c6cf", 00:09:37.211 "is_configured": true, 00:09:37.211 "data_offset": 0, 00:09:37.211 "data_size": 65536 00:09:37.211 }, 00:09:37.211 { 00:09:37.211 "name": "BaseBdev2", 00:09:37.211 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:37.211 "is_configured": true, 00:09:37.211 "data_offset": 0, 00:09:37.211 "data_size": 65536 00:09:37.211 }, 00:09:37.211 { 00:09:37.211 "name": "BaseBdev3", 00:09:37.211 "uuid": "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8", 00:09:37.211 "is_configured": true, 00:09:37.211 "data_offset": 0, 00:09:37.211 "data_size": 65536 00:09:37.211 }, 00:09:37.211 { 00:09:37.211 "name": "BaseBdev4", 00:09:37.211 "uuid": "45d591e7-330e-442e-a649-1e0d2d4ce5aa", 00:09:37.211 "is_configured": true, 00:09:37.211 "data_offset": 0, 00:09:37.211 "data_size": 65536 00:09:37.211 } 00:09:37.211 ] 00:09:37.211 } 00:09:37.211 } 00:09:37.211 }' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:37.211 BaseBdev2 
00:09:37.211 BaseBdev3 00:09:37.211 BaseBdev4' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.211 01:53:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.211 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.470 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.471 01:53:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.471 [2024-12-07 01:53:42.735148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.471 [2024-12-07 01:53:42.735180] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.471 [2024-12-07 01:53:42.735227] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.471 "name": "Existed_Raid", 00:09:37.471 "uuid": "575dd1f7-a642-488a-8c58-baa1b42acf83", 00:09:37.471 "strip_size_kb": 64, 00:09:37.471 "state": "offline", 00:09:37.471 "raid_level": "concat", 00:09:37.471 "superblock": false, 00:09:37.471 "num_base_bdevs": 4, 00:09:37.471 "num_base_bdevs_discovered": 3, 00:09:37.471 "num_base_bdevs_operational": 3, 00:09:37.471 "base_bdevs_list": [ 00:09:37.471 { 00:09:37.471 "name": null, 00:09:37.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.471 "is_configured": false, 00:09:37.471 "data_offset": 0, 00:09:37.471 "data_size": 65536 00:09:37.471 }, 00:09:37.471 { 00:09:37.471 "name": "BaseBdev2", 00:09:37.471 "uuid": "899f85ce-6f6d-4b4f-bbba-2de91d22be5f", 00:09:37.471 "is_configured": 
true, 00:09:37.471 "data_offset": 0, 00:09:37.471 "data_size": 65536 00:09:37.471 }, 00:09:37.471 { 00:09:37.471 "name": "BaseBdev3", 00:09:37.471 "uuid": "b0c31828-6ecc-4d62-9392-d1b1b79a6fd8", 00:09:37.471 "is_configured": true, 00:09:37.471 "data_offset": 0, 00:09:37.471 "data_size": 65536 00:09:37.471 }, 00:09:37.471 { 00:09:37.471 "name": "BaseBdev4", 00:09:37.471 "uuid": "45d591e7-330e-442e-a649-1e0d2d4ce5aa", 00:09:37.471 "is_configured": true, 00:09:37.471 "data_offset": 0, 00:09:37.471 "data_size": 65536 00:09:37.471 } 00:09:37.471 ] 00:09:37.471 }' 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.471 01:53:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.730 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.731 [2024-12-07 01:53:43.149449] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.731 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 [2024-12-07 01:53:43.196550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.991 01:53:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 [2024-12-07 01:53:43.263553] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:37.991 [2024-12-07 01:53:43.263602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 BaseBdev2 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.991 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.991 [ 00:09:37.991 { 00:09:37.991 "name": "BaseBdev2", 00:09:37.991 "aliases": [ 00:09:37.991 "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7" 00:09:37.991 ], 00:09:37.992 "product_name": "Malloc disk", 00:09:37.992 "block_size": 512, 00:09:37.992 "num_blocks": 65536, 00:09:37.992 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:37.992 "assigned_rate_limits": { 00:09:37.992 "rw_ios_per_sec": 0, 00:09:37.992 "rw_mbytes_per_sec": 0, 00:09:37.992 "r_mbytes_per_sec": 0, 00:09:37.992 "w_mbytes_per_sec": 0 00:09:37.992 }, 00:09:37.992 "claimed": false, 00:09:37.992 "zoned": false, 00:09:37.992 "supported_io_types": { 00:09:37.992 "read": true, 00:09:37.992 "write": true, 00:09:37.992 "unmap": true, 00:09:37.992 "flush": true, 00:09:37.992 "reset": true, 00:09:37.992 "nvme_admin": false, 00:09:37.992 "nvme_io": false, 00:09:37.992 "nvme_io_md": false, 00:09:37.992 "write_zeroes": true, 00:09:37.992 "zcopy": true, 00:09:37.992 "get_zone_info": false, 00:09:37.992 "zone_management": false, 00:09:37.992 "zone_append": false, 00:09:37.992 "compare": false, 00:09:37.992 "compare_and_write": false, 00:09:37.992 "abort": true, 00:09:37.992 "seek_hole": false, 00:09:37.992 
"seek_data": false, 00:09:37.992 "copy": true, 00:09:37.992 "nvme_iov_md": false 00:09:37.992 }, 00:09:37.992 "memory_domains": [ 00:09:37.992 { 00:09:37.992 "dma_device_id": "system", 00:09:37.992 "dma_device_type": 1 00:09:37.992 }, 00:09:37.992 { 00:09:37.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.992 "dma_device_type": 2 00:09:37.992 } 00:09:37.992 ], 00:09:37.992 "driver_specific": {} 00:09:37.992 } 00:09:37.992 ] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 BaseBdev3 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 [ 00:09:37.992 { 00:09:37.992 "name": "BaseBdev3", 00:09:37.992 "aliases": [ 00:09:37.992 "fb0510a2-43ba-4b24-bf84-cb8b89116860" 00:09:37.992 ], 00:09:37.992 "product_name": "Malloc disk", 00:09:37.992 "block_size": 512, 00:09:37.992 "num_blocks": 65536, 00:09:37.992 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:37.992 "assigned_rate_limits": { 00:09:37.992 "rw_ios_per_sec": 0, 00:09:37.992 "rw_mbytes_per_sec": 0, 00:09:37.992 "r_mbytes_per_sec": 0, 00:09:37.992 "w_mbytes_per_sec": 0 00:09:37.992 }, 00:09:37.992 "claimed": false, 00:09:37.992 "zoned": false, 00:09:37.992 "supported_io_types": { 00:09:37.992 "read": true, 00:09:37.992 "write": true, 00:09:37.992 "unmap": true, 00:09:37.992 "flush": true, 00:09:37.992 "reset": true, 00:09:37.992 "nvme_admin": false, 00:09:37.992 "nvme_io": false, 00:09:37.992 "nvme_io_md": false, 00:09:37.992 "write_zeroes": true, 00:09:37.992 "zcopy": true, 00:09:37.992 "get_zone_info": false, 00:09:37.992 "zone_management": false, 00:09:37.992 "zone_append": false, 00:09:37.992 "compare": false, 00:09:37.992 "compare_and_write": false, 00:09:37.992 "abort": true, 00:09:37.992 "seek_hole": false, 00:09:37.992 "seek_data": false, 
00:09:37.992 "copy": true, 00:09:37.992 "nvme_iov_md": false 00:09:37.992 }, 00:09:37.992 "memory_domains": [ 00:09:37.992 { 00:09:37.992 "dma_device_id": "system", 00:09:37.992 "dma_device_type": 1 00:09:37.992 }, 00:09:37.992 { 00:09:37.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.992 "dma_device_type": 2 00:09:37.992 } 00:09:37.992 ], 00:09:37.992 "driver_specific": {} 00:09:37.992 } 00:09:37.992 ] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 BaseBdev4 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:37.992 
01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.992 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.252 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.252 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:38.252 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.252 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.252 [ 00:09:38.252 { 00:09:38.252 "name": "BaseBdev4", 00:09:38.252 "aliases": [ 00:09:38.252 "0720f35f-a9e2-43e8-961e-8e7cce70fb87" 00:09:38.252 ], 00:09:38.252 "product_name": "Malloc disk", 00:09:38.252 "block_size": 512, 00:09:38.252 "num_blocks": 65536, 00:09:38.252 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:38.252 "assigned_rate_limits": { 00:09:38.252 "rw_ios_per_sec": 0, 00:09:38.252 "rw_mbytes_per_sec": 0, 00:09:38.252 "r_mbytes_per_sec": 0, 00:09:38.252 "w_mbytes_per_sec": 0 00:09:38.252 }, 00:09:38.252 "claimed": false, 00:09:38.252 "zoned": false, 00:09:38.252 "supported_io_types": { 00:09:38.252 "read": true, 00:09:38.252 "write": true, 00:09:38.252 "unmap": true, 00:09:38.252 "flush": true, 00:09:38.252 "reset": true, 00:09:38.252 "nvme_admin": false, 00:09:38.252 "nvme_io": false, 00:09:38.252 "nvme_io_md": false, 00:09:38.252 "write_zeroes": true, 00:09:38.252 "zcopy": true, 00:09:38.252 "get_zone_info": false, 00:09:38.253 "zone_management": false, 00:09:38.253 "zone_append": false, 00:09:38.253 "compare": false, 00:09:38.253 "compare_and_write": false, 00:09:38.253 "abort": true, 00:09:38.253 "seek_hole": false, 00:09:38.253 "seek_data": false, 00:09:38.253 
"copy": true, 00:09:38.253 "nvme_iov_md": false 00:09:38.253 }, 00:09:38.253 "memory_domains": [ 00:09:38.253 { 00:09:38.253 "dma_device_id": "system", 00:09:38.253 "dma_device_type": 1 00:09:38.253 }, 00:09:38.253 { 00:09:38.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.253 "dma_device_type": 2 00:09:38.253 } 00:09:38.253 ], 00:09:38.253 "driver_specific": {} 00:09:38.253 } 00:09:38.253 ] 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.253 [2024-12-07 01:53:43.486712] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.253 [2024-12-07 01:53:43.486757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.253 [2024-12-07 01:53:43.486778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.253 [2024-12-07 01:53:43.488610] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.253 [2024-12-07 01:53:43.488671] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.253 01:53:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.253 "name": "Existed_Raid", 00:09:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.253 "strip_size_kb": 64, 00:09:38.253 "state": "configuring", 00:09:38.253 
"raid_level": "concat", 00:09:38.253 "superblock": false, 00:09:38.253 "num_base_bdevs": 4, 00:09:38.253 "num_base_bdevs_discovered": 3, 00:09:38.253 "num_base_bdevs_operational": 4, 00:09:38.253 "base_bdevs_list": [ 00:09:38.253 { 00:09:38.253 "name": "BaseBdev1", 00:09:38.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.253 "is_configured": false, 00:09:38.253 "data_offset": 0, 00:09:38.253 "data_size": 0 00:09:38.253 }, 00:09:38.253 { 00:09:38.253 "name": "BaseBdev2", 00:09:38.253 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:38.253 "is_configured": true, 00:09:38.253 "data_offset": 0, 00:09:38.253 "data_size": 65536 00:09:38.253 }, 00:09:38.253 { 00:09:38.253 "name": "BaseBdev3", 00:09:38.253 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:38.253 "is_configured": true, 00:09:38.253 "data_offset": 0, 00:09:38.253 "data_size": 65536 00:09:38.253 }, 00:09:38.253 { 00:09:38.253 "name": "BaseBdev4", 00:09:38.253 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:38.253 "is_configured": true, 00:09:38.253 "data_offset": 0, 00:09:38.253 "data_size": 65536 00:09:38.253 } 00:09:38.253 ] 00:09:38.253 }' 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.253 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.512 [2024-12-07 01:53:43.917960] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.512 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.771 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.771 "name": "Existed_Raid", 00:09:38.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.771 "strip_size_kb": 64, 00:09:38.771 "state": "configuring", 00:09:38.771 "raid_level": "concat", 00:09:38.771 "superblock": false, 
00:09:38.771 "num_base_bdevs": 4, 00:09:38.771 "num_base_bdevs_discovered": 2, 00:09:38.771 "num_base_bdevs_operational": 4, 00:09:38.771 "base_bdevs_list": [ 00:09:38.771 { 00:09:38.771 "name": "BaseBdev1", 00:09:38.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:38.771 "is_configured": false, 00:09:38.771 "data_offset": 0, 00:09:38.771 "data_size": 0 00:09:38.771 }, 00:09:38.771 { 00:09:38.771 "name": null, 00:09:38.771 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:38.771 "is_configured": false, 00:09:38.771 "data_offset": 0, 00:09:38.771 "data_size": 65536 00:09:38.771 }, 00:09:38.771 { 00:09:38.771 "name": "BaseBdev3", 00:09:38.771 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:38.771 "is_configured": true, 00:09:38.771 "data_offset": 0, 00:09:38.771 "data_size": 65536 00:09:38.771 }, 00:09:38.771 { 00:09:38.771 "name": "BaseBdev4", 00:09:38.771 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:38.771 "is_configured": true, 00:09:38.772 "data_offset": 0, 00:09:38.772 "data_size": 65536 00:09:38.772 } 00:09:38.772 ] 00:09:38.772 }' 00:09:38.772 01:53:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.772 01:53:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:39.031 01:53:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.031 [2024-12-07 01:53:44.423769] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.031 BaseBdev1 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.031 01:53:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.031 [ 00:09:39.031 { 00:09:39.031 "name": "BaseBdev1", 00:09:39.031 "aliases": [ 00:09:39.031 "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a" 00:09:39.031 ], 00:09:39.031 "product_name": "Malloc disk", 00:09:39.031 "block_size": 512, 00:09:39.031 "num_blocks": 65536, 00:09:39.031 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:39.031 "assigned_rate_limits": { 00:09:39.031 "rw_ios_per_sec": 0, 00:09:39.031 "rw_mbytes_per_sec": 0, 00:09:39.032 "r_mbytes_per_sec": 0, 00:09:39.032 "w_mbytes_per_sec": 0 00:09:39.032 }, 00:09:39.032 "claimed": true, 00:09:39.032 "claim_type": "exclusive_write", 00:09:39.032 "zoned": false, 00:09:39.032 "supported_io_types": { 00:09:39.032 "read": true, 00:09:39.032 "write": true, 00:09:39.032 "unmap": true, 00:09:39.032 "flush": true, 00:09:39.032 "reset": true, 00:09:39.032 "nvme_admin": false, 00:09:39.032 "nvme_io": false, 00:09:39.032 "nvme_io_md": false, 00:09:39.032 "write_zeroes": true, 00:09:39.032 "zcopy": true, 00:09:39.032 "get_zone_info": false, 00:09:39.032 "zone_management": false, 00:09:39.032 "zone_append": false, 00:09:39.032 "compare": false, 00:09:39.032 "compare_and_write": false, 00:09:39.032 "abort": true, 00:09:39.032 "seek_hole": false, 00:09:39.032 "seek_data": false, 00:09:39.032 "copy": true, 00:09:39.032 "nvme_iov_md": false 00:09:39.032 }, 00:09:39.032 "memory_domains": [ 00:09:39.032 { 00:09:39.032 "dma_device_id": "system", 00:09:39.032 "dma_device_type": 1 00:09:39.032 }, 00:09:39.032 { 00:09:39.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.032 "dma_device_type": 2 00:09:39.032 } 00:09:39.032 ], 00:09:39.032 "driver_specific": {} 00:09:39.032 } 00:09:39.032 ] 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
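The `waitforbdev BaseBdev1` call above (autotest_common.sh@899-907) retries a bdev lookup until it appears or a timeout lapses. A minimal sketch of that polling shape, with a hypothetical `lookup_bdev` stand-in for the real `rpc_cmd bdev_get_bdevs -b NAME -t TIMEOUT` call (here it succeeds on the third attempt purely to exercise the loop):

```shell
# Sketch of the waitforbdev polling pattern; lookup_bdev is a stand-in
# for `rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"`.
attempts=0
lookup_bdev() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }

waitforbdev() {
  local bdev_name=$1 bdev_timeout=${2:-2000} i
  # Poll in ~100ms steps until the lookup succeeds or the timeout lapses.
  for ((i = 0; i < bdev_timeout; i += 100)); do
    if lookup_bdev "$bdev_name"; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 ready after $attempts lookups"
```

The real helper additionally runs `bdev_wait_for_examine` first, as the trace above shows.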
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.032 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.291 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.291 "name": "Existed_Raid", 00:09:39.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.291 "strip_size_kb": 64, 00:09:39.291 "state": "configuring", 00:09:39.291 "raid_level": "concat", 00:09:39.291 "superblock": false, 
00:09:39.291 "num_base_bdevs": 4, 00:09:39.291 "num_base_bdevs_discovered": 3, 00:09:39.291 "num_base_bdevs_operational": 4, 00:09:39.291 "base_bdevs_list": [ 00:09:39.291 { 00:09:39.291 "name": "BaseBdev1", 00:09:39.291 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:39.291 "is_configured": true, 00:09:39.291 "data_offset": 0, 00:09:39.291 "data_size": 65536 00:09:39.291 }, 00:09:39.291 { 00:09:39.291 "name": null, 00:09:39.291 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:39.291 "is_configured": false, 00:09:39.291 "data_offset": 0, 00:09:39.291 "data_size": 65536 00:09:39.291 }, 00:09:39.291 { 00:09:39.291 "name": "BaseBdev3", 00:09:39.291 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:39.291 "is_configured": true, 00:09:39.291 "data_offset": 0, 00:09:39.291 "data_size": 65536 00:09:39.291 }, 00:09:39.291 { 00:09:39.291 "name": "BaseBdev4", 00:09:39.291 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:39.291 "is_configured": true, 00:09:39.291 "data_offset": 0, 00:09:39.291 "data_size": 65536 00:09:39.291 } 00:09:39.291 ] 00:09:39.291 }' 00:09:39.291 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.291 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:39.550 01:53:44 
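The `verify_raid_bdev_state Existed_Raid configuring concat 64 4` checks above fetch `bdev_raid_get_bdevs all` and filter it with `jq -r '.[] | select(.name == "Existed_Raid")'` (bdev_raid.sh@113). A standalone sketch of that probe against a canned, abbreviated copy of the RPC response from this log, in place of a live `rpc_cmd` call:

```shell
# Canned, abbreviated bdev_raid_get_bdevs output from the log above.
raid_bdevs='[{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 4
}]'

expected_state=configuring
# Same select pattern as bdev_raid.sh@113, then extract fields from $tmp.
tmp=$(jq -r '.[] | select(.name == "Existed_Raid")' <<< "$raid_bdevs")
state=$(jq -r '.state' <<< "$tmp")
level=$(jq -r '.raid_level' <<< "$tmp")

[[ "$state" == "$expected_state" ]] && echo "state OK: $state/$level"
```

The real helper goes on to compare `raid_level`, `strip_size_kb`, and the base-bdev counts the same way.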
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 [2024-12-07 01:53:44.910989] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.550 01:53:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.550 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.550 "name": "Existed_Raid", 00:09:39.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.550 "strip_size_kb": 64, 00:09:39.550 "state": "configuring", 00:09:39.550 "raid_level": "concat", 00:09:39.550 "superblock": false, 00:09:39.550 "num_base_bdevs": 4, 00:09:39.550 "num_base_bdevs_discovered": 2, 00:09:39.550 "num_base_bdevs_operational": 4, 00:09:39.550 "base_bdevs_list": [ 00:09:39.550 { 00:09:39.550 "name": "BaseBdev1", 00:09:39.550 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:39.550 "is_configured": true, 00:09:39.550 "data_offset": 0, 00:09:39.550 "data_size": 65536 00:09:39.550 }, 00:09:39.550 { 00:09:39.550 "name": null, 00:09:39.550 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:39.550 "is_configured": false, 00:09:39.550 "data_offset": 0, 00:09:39.550 "data_size": 65536 00:09:39.550 }, 00:09:39.550 { 00:09:39.550 "name": null, 00:09:39.550 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:39.550 "is_configured": false, 00:09:39.550 "data_offset": 0, 00:09:39.550 "data_size": 65536 00:09:39.551 }, 00:09:39.551 { 00:09:39.551 "name": "BaseBdev4", 00:09:39.551 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:39.551 "is_configured": true, 00:09:39.551 "data_offset": 0, 00:09:39.551 "data_size": 65536 00:09:39.551 } 00:09:39.551 ] 00:09:39.551 }' 00:09:39.551 01:53:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.551 01:53:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.117 [2024-12-07 01:53:45.410186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.117 "name": "Existed_Raid", 00:09:40.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.117 "strip_size_kb": 64, 00:09:40.117 "state": "configuring", 00:09:40.117 "raid_level": "concat", 00:09:40.117 "superblock": false, 00:09:40.117 "num_base_bdevs": 4, 00:09:40.117 "num_base_bdevs_discovered": 3, 00:09:40.117 "num_base_bdevs_operational": 4, 00:09:40.117 "base_bdevs_list": [ 00:09:40.117 { 00:09:40.117 "name": "BaseBdev1", 00:09:40.117 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:40.117 "is_configured": true, 00:09:40.117 "data_offset": 0, 00:09:40.117 "data_size": 65536 00:09:40.117 }, 00:09:40.117 { 00:09:40.117 "name": null, 00:09:40.117 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:40.117 "is_configured": false, 00:09:40.117 "data_offset": 0, 00:09:40.117 "data_size": 65536 00:09:40.117 }, 00:09:40.117 { 00:09:40.117 "name": "BaseBdev3", 00:09:40.117 "uuid": 
"fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:40.117 "is_configured": true, 00:09:40.117 "data_offset": 0, 00:09:40.117 "data_size": 65536 00:09:40.117 }, 00:09:40.117 { 00:09:40.117 "name": "BaseBdev4", 00:09:40.117 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:40.117 "is_configured": true, 00:09:40.117 "data_offset": 0, 00:09:40.117 "data_size": 65536 00:09:40.117 } 00:09:40.117 ] 00:09:40.117 }' 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.117 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.375 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:40.375 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.376 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.635 [2024-12-07 01:53:45.837446] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
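Deleting `BaseBdev1` while it is claimed by the raid (the `_raid_bdev_remove_base_bdev` debug line above) leaves its slot with `"name": null, "is_configured": false`, and `num_base_bdevs_discovered` drops to 2. A sketch of counting configured members the way the log's jq probes (`.[0].base_bdevs_list[N].is_configured`) do, against canned data mirroring the state after both the BaseBdev1 and BaseBdev2 slots are unconfigured:

```shell
# Canned base_bdevs_list mirroring the "num_base_bdevs_discovered": 2 state.
base_list='{"base_bdevs_list": [
  {"name": null,        "is_configured": false},
  {"name": null,        "is_configured": false},
  {"name": "BaseBdev3", "is_configured": true},
  {"name": "BaseBdev4", "is_configured": true}]}'

# Count slots whose is_configured flag is true.
discovered=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$base_list")
echo "num_base_bdevs_discovered=$discovered"
```

With 2 of 4 operational members discovered, the raid stays `configuring`, as the next state check verifies.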
00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.635 "name": "Existed_Raid", 00:09:40.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.635 "strip_size_kb": 64, 00:09:40.635 "state": "configuring", 00:09:40.635 "raid_level": "concat", 00:09:40.635 "superblock": false, 00:09:40.635 "num_base_bdevs": 4, 00:09:40.635 
"num_base_bdevs_discovered": 2, 00:09:40.635 "num_base_bdevs_operational": 4, 00:09:40.635 "base_bdevs_list": [ 00:09:40.635 { 00:09:40.635 "name": null, 00:09:40.635 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:40.635 "is_configured": false, 00:09:40.635 "data_offset": 0, 00:09:40.635 "data_size": 65536 00:09:40.635 }, 00:09:40.635 { 00:09:40.635 "name": null, 00:09:40.635 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:40.635 "is_configured": false, 00:09:40.635 "data_offset": 0, 00:09:40.635 "data_size": 65536 00:09:40.635 }, 00:09:40.635 { 00:09:40.635 "name": "BaseBdev3", 00:09:40.635 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:40.635 "is_configured": true, 00:09:40.635 "data_offset": 0, 00:09:40.635 "data_size": 65536 00:09:40.635 }, 00:09:40.635 { 00:09:40.635 "name": "BaseBdev4", 00:09:40.635 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:40.635 "is_configured": true, 00:09:40.635 "data_offset": 0, 00:09:40.635 "data_size": 65536 00:09:40.635 } 00:09:40.635 ] 00:09:40.635 }' 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.635 01:53:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.914 [2024-12-07 01:53:46.306833] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.914 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.914 "name": "Existed_Raid", 00:09:40.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.914 "strip_size_kb": 64, 00:09:40.914 "state": "configuring", 00:09:40.914 "raid_level": "concat", 00:09:40.914 "superblock": false, 00:09:40.914 "num_base_bdevs": 4, 00:09:40.914 "num_base_bdevs_discovered": 3, 00:09:40.914 "num_base_bdevs_operational": 4, 00:09:40.914 "base_bdevs_list": [ 00:09:40.914 { 00:09:40.914 "name": null, 00:09:40.914 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:40.914 "is_configured": false, 00:09:40.914 "data_offset": 0, 00:09:40.914 "data_size": 65536 00:09:40.914 }, 00:09:40.914 { 00:09:40.914 "name": "BaseBdev2", 00:09:40.915 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:40.915 "is_configured": true, 00:09:40.915 "data_offset": 0, 00:09:40.915 "data_size": 65536 00:09:40.915 }, 00:09:40.915 { 00:09:40.915 "name": "BaseBdev3", 00:09:40.915 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:40.915 "is_configured": true, 00:09:40.915 "data_offset": 0, 00:09:40.915 "data_size": 65536 00:09:40.915 }, 00:09:40.915 { 00:09:40.915 "name": "BaseBdev4", 00:09:40.915 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:40.915 "is_configured": true, 00:09:40.915 "data_offset": 0, 00:09:40.915 "data_size": 65536 00:09:40.915 } 00:09:40.915 ] 00:09:40.915 }' 00:09:40.915 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.915 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.499 [2024-12-07 01:53:46.840584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.499 [2024-12-07 01:53:46.840631] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:41.499 [2024-12-07 01:53:46.840639] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:41.499 [2024-12-07 01:53:46.840915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:41.499 [2024-12-07 01:53:46.841033] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:41.499 [2024-12-07 01:53:46.841047] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:41.499 [2024-12-07 01:53:46.841211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.499 NewBaseBdev 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:41.499 [ 00:09:41.499 { 00:09:41.499 "name": "NewBaseBdev", 00:09:41.499 "aliases": [ 00:09:41.499 "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a" 00:09:41.499 ], 00:09:41.499 "product_name": "Malloc disk", 00:09:41.499 "block_size": 512, 00:09:41.499 "num_blocks": 65536, 00:09:41.499 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:41.499 "assigned_rate_limits": { 00:09:41.499 "rw_ios_per_sec": 0, 00:09:41.499 "rw_mbytes_per_sec": 0, 00:09:41.499 "r_mbytes_per_sec": 0, 00:09:41.499 "w_mbytes_per_sec": 0 00:09:41.499 }, 00:09:41.499 "claimed": true, 00:09:41.499 "claim_type": "exclusive_write", 00:09:41.499 "zoned": false, 00:09:41.499 "supported_io_types": { 00:09:41.499 "read": true, 00:09:41.499 "write": true, 00:09:41.499 "unmap": true, 00:09:41.499 "flush": true, 00:09:41.499 "reset": true, 00:09:41.499 "nvme_admin": false, 00:09:41.499 "nvme_io": false, 00:09:41.499 "nvme_io_md": false, 00:09:41.499 "write_zeroes": true, 00:09:41.499 "zcopy": true, 00:09:41.499 "get_zone_info": false, 00:09:41.499 "zone_management": false, 00:09:41.499 "zone_append": false, 00:09:41.499 "compare": false, 00:09:41.499 "compare_and_write": false, 00:09:41.499 "abort": true, 00:09:41.499 "seek_hole": false, 00:09:41.499 "seek_data": false, 00:09:41.499 "copy": true, 00:09:41.499 "nvme_iov_md": false 00:09:41.499 }, 00:09:41.499 "memory_domains": [ 00:09:41.499 { 00:09:41.499 "dma_device_id": "system", 00:09:41.499 "dma_device_type": 1 00:09:41.499 }, 00:09:41.499 { 00:09:41.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.499 "dma_device_type": 2 00:09:41.499 } 00:09:41.499 ], 00:09:41.499 "driver_specific": {} 00:09:41.499 } 00:09:41.499 ] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.499 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.500 "name": "Existed_Raid", 00:09:41.500 "uuid": "d1b291c9-9e1a-410e-85ee-db22d63f78ea", 00:09:41.500 "strip_size_kb": 64, 00:09:41.500 "state": "online", 00:09:41.500 "raid_level": "concat", 00:09:41.500 "superblock": false, 00:09:41.500 
"num_base_bdevs": 4, 00:09:41.500 "num_base_bdevs_discovered": 4, 00:09:41.500 "num_base_bdevs_operational": 4, 00:09:41.500 "base_bdevs_list": [ 00:09:41.500 { 00:09:41.500 "name": "NewBaseBdev", 00:09:41.500 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:41.500 "is_configured": true, 00:09:41.500 "data_offset": 0, 00:09:41.500 "data_size": 65536 00:09:41.500 }, 00:09:41.500 { 00:09:41.500 "name": "BaseBdev2", 00:09:41.500 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:41.500 "is_configured": true, 00:09:41.500 "data_offset": 0, 00:09:41.500 "data_size": 65536 00:09:41.500 }, 00:09:41.500 { 00:09:41.500 "name": "BaseBdev3", 00:09:41.500 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:41.500 "is_configured": true, 00:09:41.500 "data_offset": 0, 00:09:41.500 "data_size": 65536 00:09:41.500 }, 00:09:41.500 { 00:09:41.500 "name": "BaseBdev4", 00:09:41.500 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:41.500 "is_configured": true, 00:09:41.500 "data_offset": 0, 00:09:41.500 "data_size": 65536 00:09:41.500 } 00:09:41.500 ] 00:09:41.500 }' 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.500 01:53:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.067 01:53:47 
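Once `NewBaseBdev` (recreated with the original UUID `8b1f8c2c-...`) fills the last slot, `num_base_bdevs_discovered` reaches `num_base_bdevs_operational` and the state above flips from `configuring` to `online`. A sketch of that comparison as observed in this run (not the full raid state machine), against a canned copy of the online-state response:

```shell
# Canned slice of the online Existed_Raid info from the log above.
raid='{"num_base_bdevs": 4, "num_base_bdevs_discovered": 4,
       "num_base_bdevs_operational": 4, "state": "online"}'

discovered=$(jq '.num_base_bdevs_discovered' <<< "$raid")
operational=$(jq '.num_base_bdevs_operational' <<< "$raid")

# All operational members discovered -> the bdev should report "online".
if [ "$discovered" -eq "$operational" ]; then expected=online; else expected=configuring; fi
actual=$(jq -r '.state' <<< "$raid")
[ "$actual" = "$expected" ] && echo "state check passed: $actual"
```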
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.067 [2024-12-07 01:53:47.324102] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.067 "name": "Existed_Raid", 00:09:42.067 "aliases": [ 00:09:42.067 "d1b291c9-9e1a-410e-85ee-db22d63f78ea" 00:09:42.067 ], 00:09:42.067 "product_name": "Raid Volume", 00:09:42.067 "block_size": 512, 00:09:42.067 "num_blocks": 262144, 00:09:42.067 "uuid": "d1b291c9-9e1a-410e-85ee-db22d63f78ea", 00:09:42.067 "assigned_rate_limits": { 00:09:42.067 "rw_ios_per_sec": 0, 00:09:42.067 "rw_mbytes_per_sec": 0, 00:09:42.067 "r_mbytes_per_sec": 0, 00:09:42.067 "w_mbytes_per_sec": 0 00:09:42.067 }, 00:09:42.067 "claimed": false, 00:09:42.067 "zoned": false, 00:09:42.067 "supported_io_types": { 00:09:42.067 "read": true, 00:09:42.067 "write": true, 00:09:42.067 "unmap": true, 00:09:42.067 "flush": true, 00:09:42.067 "reset": true, 00:09:42.067 "nvme_admin": false, 00:09:42.067 "nvme_io": false, 00:09:42.067 "nvme_io_md": false, 00:09:42.067 "write_zeroes": true, 00:09:42.067 "zcopy": false, 00:09:42.067 "get_zone_info": false, 00:09:42.067 "zone_management": false, 00:09:42.067 "zone_append": false, 00:09:42.067 "compare": false, 00:09:42.067 "compare_and_write": false, 00:09:42.067 "abort": false, 00:09:42.067 "seek_hole": false, 00:09:42.067 "seek_data": false, 00:09:42.067 "copy": false, 00:09:42.067 "nvme_iov_md": false 00:09:42.067 }, 
00:09:42.067 "memory_domains": [ 00:09:42.067 { 00:09:42.067 "dma_device_id": "system", 00:09:42.067 "dma_device_type": 1 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.067 "dma_device_type": 2 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "system", 00:09:42.067 "dma_device_type": 1 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.067 "dma_device_type": 2 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "system", 00:09:42.067 "dma_device_type": 1 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.067 "dma_device_type": 2 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "system", 00:09:42.067 "dma_device_type": 1 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.067 "dma_device_type": 2 00:09:42.067 } 00:09:42.067 ], 00:09:42.067 "driver_specific": { 00:09:42.067 "raid": { 00:09:42.067 "uuid": "d1b291c9-9e1a-410e-85ee-db22d63f78ea", 00:09:42.067 "strip_size_kb": 64, 00:09:42.067 "state": "online", 00:09:42.067 "raid_level": "concat", 00:09:42.067 "superblock": false, 00:09:42.067 "num_base_bdevs": 4, 00:09:42.067 "num_base_bdevs_discovered": 4, 00:09:42.067 "num_base_bdevs_operational": 4, 00:09:42.067 "base_bdevs_list": [ 00:09:42.067 { 00:09:42.067 "name": "NewBaseBdev", 00:09:42.067 "uuid": "8b1f8c2c-fbbb-4fbb-adf1-36c244bf6f6a", 00:09:42.067 "is_configured": true, 00:09:42.067 "data_offset": 0, 00:09:42.067 "data_size": 65536 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "name": "BaseBdev2", 00:09:42.067 "uuid": "9ab37a7b-0666-41e2-a7ee-8b3aefd0dda7", 00:09:42.067 "is_configured": true, 00:09:42.067 "data_offset": 0, 00:09:42.067 "data_size": 65536 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "name": "BaseBdev3", 00:09:42.067 "uuid": "fb0510a2-43ba-4b24-bf84-cb8b89116860", 00:09:42.067 "is_configured": true, 00:09:42.067 "data_offset": 0, 
00:09:42.067 "data_size": 65536 00:09:42.067 }, 00:09:42.067 { 00:09:42.067 "name": "BaseBdev4", 00:09:42.067 "uuid": "0720f35f-a9e2-43e8-961e-8e7cce70fb87", 00:09:42.067 "is_configured": true, 00:09:42.067 "data_offset": 0, 00:09:42.067 "data_size": 65536 00:09:42.067 } 00:09:42.067 ] 00:09:42.067 } 00:09:42.067 } 00:09:42.067 }' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:42.067 BaseBdev2 00:09:42.067 BaseBdev3 00:09:42.067 BaseBdev4' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.067 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.326 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.326 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.326 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.326 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.327 [2024-12-07 01:53:47.615298] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:42.327 [2024-12-07 01:53:47.615328] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.327 [2024-12-07 01:53:47.615394] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.327 [2024-12-07 01:53:47.615458] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.327 [2024-12-07 01:53:47.615474] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81894 00:09:42.327 01:53:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 81894 ']' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 81894 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81894 00:09:42.327 killing process with pid 81894 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81894' 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 81894 00:09:42.327 [2024-12-07 01:53:47.661313] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.327 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 81894 00:09:42.327 [2024-12-07 01:53:47.702391] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.587 ************************************ 00:09:42.587 END TEST raid_state_function_test 00:09:42.587 ************************************ 00:09:42.587 01:53:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:42.587 00:09:42.587 real 0m9.304s 00:09:42.587 user 0m15.961s 00:09:42.587 sys 0m1.862s 00:09:42.587 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.587 01:53:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.587 01:53:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:42.587 01:53:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:42.587 01:53:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.587 01:53:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.587 ************************************ 00:09:42.587 START TEST raid_state_function_test_sb 00:09:42.587 ************************************ 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82539 00:09:42.587 01:53:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82539' 00:09:42.587 Process raid pid: 82539 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82539 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82539 ']' 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:42.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:42.587 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.847 [2024-12-07 01:53:48.096139] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:42.847 [2024-12-07 01:53:48.096251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.847 [2024-12-07 01:53:48.239532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.847 [2024-12-07 01:53:48.286966] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.106 [2024-12-07 01:53:48.328786] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.106 [2024-12-07 01:53:48.328822] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.687 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:43.687 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:43.687 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.688 [2024-12-07 01:53:48.929816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.688 [2024-12-07 01:53:48.929865] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.688 [2024-12-07 01:53:48.929885] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.688 [2024-12-07 01:53:48.929895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.688 [2024-12-07 01:53:48.929901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:43.688 [2024-12-07 01:53:48.929911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.688 [2024-12-07 01:53:48.929917] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:43.688 [2024-12-07 01:53:48.929926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.688 
01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.688 "name": "Existed_Raid", 00:09:43.688 "uuid": "82690983-195e-45b5-9876-0305a5526060", 00:09:43.688 "strip_size_kb": 64, 00:09:43.688 "state": "configuring", 00:09:43.688 "raid_level": "concat", 00:09:43.688 "superblock": true, 00:09:43.688 "num_base_bdevs": 4, 00:09:43.688 "num_base_bdevs_discovered": 0, 00:09:43.688 "num_base_bdevs_operational": 4, 00:09:43.688 "base_bdevs_list": [ 00:09:43.688 { 00:09:43.688 "name": "BaseBdev1", 00:09:43.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.688 "is_configured": false, 00:09:43.688 "data_offset": 0, 00:09:43.688 "data_size": 0 00:09:43.688 }, 00:09:43.688 { 00:09:43.688 "name": "BaseBdev2", 00:09:43.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.688 "is_configured": false, 00:09:43.688 "data_offset": 0, 00:09:43.688 "data_size": 0 00:09:43.688 }, 00:09:43.688 { 00:09:43.688 "name": "BaseBdev3", 00:09:43.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.688 "is_configured": false, 00:09:43.688 "data_offset": 0, 00:09:43.688 "data_size": 0 00:09:43.688 }, 00:09:43.688 { 00:09:43.688 "name": "BaseBdev4", 00:09:43.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.688 "is_configured": false, 00:09:43.688 "data_offset": 0, 00:09:43.688 "data_size": 0 00:09:43.688 } 00:09:43.688 ] 00:09:43.688 }' 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.688 01:53:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.946 01:53:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:43.946 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.946 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.947 [2024-12-07 01:53:49.364931] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:43.947 [2024-12-07 01:53:49.364974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.947 [2024-12-07 01:53:49.376930] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.947 [2024-12-07 01:53:49.376969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.947 [2024-12-07 01:53:49.376977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:43.947 [2024-12-07 01:53:49.376986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:43.947 [2024-12-07 01:53:49.376993] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:43.947 [2024-12-07 01:53:49.377001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:43.947 [2024-12-07 01:53:49.377008] bdev.c:8272:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:43.947 [2024-12-07 01:53:49.377033] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.947 [2024-12-07 01:53:49.393968] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.947 BaseBdev1 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.947 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.205 [ 00:09:44.205 { 00:09:44.205 "name": "BaseBdev1", 00:09:44.205 "aliases": [ 00:09:44.205 "11701be1-5ea3-4f04-a1e3-480da649ca2a" 00:09:44.205 ], 00:09:44.205 "product_name": "Malloc disk", 00:09:44.205 "block_size": 512, 00:09:44.205 "num_blocks": 65536, 00:09:44.205 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:44.205 "assigned_rate_limits": { 00:09:44.205 "rw_ios_per_sec": 0, 00:09:44.205 "rw_mbytes_per_sec": 0, 00:09:44.205 "r_mbytes_per_sec": 0, 00:09:44.205 "w_mbytes_per_sec": 0 00:09:44.205 }, 00:09:44.205 "claimed": true, 00:09:44.205 "claim_type": "exclusive_write", 00:09:44.205 "zoned": false, 00:09:44.205 "supported_io_types": { 00:09:44.205 "read": true, 00:09:44.205 "write": true, 00:09:44.205 "unmap": true, 00:09:44.205 "flush": true, 00:09:44.205 "reset": true, 00:09:44.205 "nvme_admin": false, 00:09:44.205 "nvme_io": false, 00:09:44.205 "nvme_io_md": false, 00:09:44.205 "write_zeroes": true, 00:09:44.205 "zcopy": true, 00:09:44.205 "get_zone_info": false, 00:09:44.205 "zone_management": false, 00:09:44.205 "zone_append": false, 00:09:44.205 "compare": false, 00:09:44.205 "compare_and_write": false, 00:09:44.205 "abort": true, 00:09:44.205 "seek_hole": false, 00:09:44.205 "seek_data": false, 00:09:44.205 "copy": true, 00:09:44.205 "nvme_iov_md": false 00:09:44.205 }, 00:09:44.205 "memory_domains": [ 00:09:44.205 { 00:09:44.205 "dma_device_id": "system", 00:09:44.205 "dma_device_type": 1 00:09:44.205 }, 00:09:44.205 { 00:09:44.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.205 "dma_device_type": 2 00:09:44.205 } 
00:09:44.205 ], 00:09:44.205 "driver_specific": {} 00:09:44.205 } 00:09:44.205 ] 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.205 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.206 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.206 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.206 01:53:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.206 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.206 "name": "Existed_Raid", 00:09:44.206 "uuid": "ca3c2df9-aa86-48a7-85fd-b57ac19cdec9", 00:09:44.206 "strip_size_kb": 64, 00:09:44.206 "state": "configuring", 00:09:44.206 "raid_level": "concat", 00:09:44.206 "superblock": true, 00:09:44.206 "num_base_bdevs": 4, 00:09:44.206 "num_base_bdevs_discovered": 1, 00:09:44.206 "num_base_bdevs_operational": 4, 00:09:44.206 "base_bdevs_list": [ 00:09:44.206 { 00:09:44.206 "name": "BaseBdev1", 00:09:44.206 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:44.206 "is_configured": true, 00:09:44.206 "data_offset": 2048, 00:09:44.206 "data_size": 63488 00:09:44.206 }, 00:09:44.206 { 00:09:44.206 "name": "BaseBdev2", 00:09:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.206 "is_configured": false, 00:09:44.206 "data_offset": 0, 00:09:44.206 "data_size": 0 00:09:44.206 }, 00:09:44.206 { 00:09:44.206 "name": "BaseBdev3", 00:09:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.206 "is_configured": false, 00:09:44.206 "data_offset": 0, 00:09:44.206 "data_size": 0 00:09:44.206 }, 00:09:44.206 { 00:09:44.206 "name": "BaseBdev4", 00:09:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.206 "is_configured": false, 00:09:44.206 "data_offset": 0, 00:09:44.206 "data_size": 0 00:09:44.206 } 00:09:44.206 ] 00:09:44.206 }' 00:09:44.206 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.206 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.464 01:53:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.464 [2024-12-07 01:53:49.885166] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.464 [2024-12-07 01:53:49.885226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.464 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 [2024-12-07 01:53:49.897208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.465 [2024-12-07 01:53:49.899102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.465 [2024-12-07 01:53:49.899143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.465 [2024-12-07 01:53:49.899152] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.465 [2024-12-07 01:53:49.899160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.465 [2024-12-07 01:53:49.899166] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:44.465 [2024-12-07 01:53:49.899174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.465 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.723 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:44.723 "name": "Existed_Raid", 00:09:44.723 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:44.723 "strip_size_kb": 64, 00:09:44.723 "state": "configuring", 00:09:44.723 "raid_level": "concat", 00:09:44.723 "superblock": true, 00:09:44.723 "num_base_bdevs": 4, 00:09:44.723 "num_base_bdevs_discovered": 1, 00:09:44.723 "num_base_bdevs_operational": 4, 00:09:44.723 "base_bdevs_list": [ 00:09:44.723 { 00:09:44.723 "name": "BaseBdev1", 00:09:44.723 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:44.723 "is_configured": true, 00:09:44.723 "data_offset": 2048, 00:09:44.723 "data_size": 63488 00:09:44.723 }, 00:09:44.723 { 00:09:44.723 "name": "BaseBdev2", 00:09:44.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.723 "is_configured": false, 00:09:44.723 "data_offset": 0, 00:09:44.723 "data_size": 0 00:09:44.723 }, 00:09:44.723 { 00:09:44.723 "name": "BaseBdev3", 00:09:44.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.723 "is_configured": false, 00:09:44.723 "data_offset": 0, 00:09:44.723 "data_size": 0 00:09:44.723 }, 00:09:44.723 { 00:09:44.723 "name": "BaseBdev4", 00:09:44.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.723 "is_configured": false, 00:09:44.723 "data_offset": 0, 00:09:44.723 "data_size": 0 00:09:44.723 } 00:09:44.723 ] 00:09:44.723 }' 00:09:44.723 01:53:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.723 01:53:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.982 [2024-12-07 01:53:50.342949] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:44.982 BaseBdev2 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.982 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.982 [ 00:09:44.982 { 00:09:44.982 "name": "BaseBdev2", 00:09:44.982 "aliases": [ 00:09:44.982 "be684cd9-912d-44d9-96c1-48958c744595" 00:09:44.982 ], 00:09:44.982 "product_name": "Malloc disk", 00:09:44.982 "block_size": 512, 00:09:44.982 "num_blocks": 65536, 00:09:44.982 "uuid": "be684cd9-912d-44d9-96c1-48958c744595", 
00:09:44.983 "assigned_rate_limits": { 00:09:44.983 "rw_ios_per_sec": 0, 00:09:44.983 "rw_mbytes_per_sec": 0, 00:09:44.983 "r_mbytes_per_sec": 0, 00:09:44.983 "w_mbytes_per_sec": 0 00:09:44.983 }, 00:09:44.983 "claimed": true, 00:09:44.983 "claim_type": "exclusive_write", 00:09:44.983 "zoned": false, 00:09:44.983 "supported_io_types": { 00:09:44.983 "read": true, 00:09:44.983 "write": true, 00:09:44.983 "unmap": true, 00:09:44.983 "flush": true, 00:09:44.983 "reset": true, 00:09:44.983 "nvme_admin": false, 00:09:44.983 "nvme_io": false, 00:09:44.983 "nvme_io_md": false, 00:09:44.983 "write_zeroes": true, 00:09:44.983 "zcopy": true, 00:09:44.983 "get_zone_info": false, 00:09:44.983 "zone_management": false, 00:09:44.983 "zone_append": false, 00:09:44.983 "compare": false, 00:09:44.983 "compare_and_write": false, 00:09:44.983 "abort": true, 00:09:44.983 "seek_hole": false, 00:09:44.983 "seek_data": false, 00:09:44.983 "copy": true, 00:09:44.983 "nvme_iov_md": false 00:09:44.983 }, 00:09:44.983 "memory_domains": [ 00:09:44.983 { 00:09:44.983 "dma_device_id": "system", 00:09:44.983 "dma_device_type": 1 00:09:44.983 }, 00:09:44.983 { 00:09:44.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.983 "dma_device_type": 2 00:09:44.983 } 00:09:44.983 ], 00:09:44.983 "driver_specific": {} 00:09:44.983 } 00:09:44.983 ] 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.983 "name": "Existed_Raid", 00:09:44.983 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:44.983 "strip_size_kb": 64, 00:09:44.983 "state": "configuring", 00:09:44.983 "raid_level": "concat", 00:09:44.983 "superblock": true, 00:09:44.983 "num_base_bdevs": 4, 00:09:44.983 "num_base_bdevs_discovered": 2, 00:09:44.983 
"num_base_bdevs_operational": 4, 00:09:44.983 "base_bdevs_list": [ 00:09:44.983 { 00:09:44.983 "name": "BaseBdev1", 00:09:44.983 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:44.983 "is_configured": true, 00:09:44.983 "data_offset": 2048, 00:09:44.983 "data_size": 63488 00:09:44.983 }, 00:09:44.983 { 00:09:44.983 "name": "BaseBdev2", 00:09:44.983 "uuid": "be684cd9-912d-44d9-96c1-48958c744595", 00:09:44.983 "is_configured": true, 00:09:44.983 "data_offset": 2048, 00:09:44.983 "data_size": 63488 00:09:44.983 }, 00:09:44.983 { 00:09:44.983 "name": "BaseBdev3", 00:09:44.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.983 "is_configured": false, 00:09:44.983 "data_offset": 0, 00:09:44.983 "data_size": 0 00:09:44.983 }, 00:09:44.983 { 00:09:44.983 "name": "BaseBdev4", 00:09:44.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.983 "is_configured": false, 00:09:44.983 "data_offset": 0, 00:09:44.983 "data_size": 0 00:09:44.983 } 00:09:44.983 ] 00:09:44.983 }' 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.983 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [2024-12-07 01:53:50.820972] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:45.549 BaseBdev3 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 [ 00:09:45.549 { 00:09:45.549 "name": "BaseBdev3", 00:09:45.549 "aliases": [ 00:09:45.549 "8817b4f4-600c-492d-aaa0-3f68d2a735d7" 00:09:45.549 ], 00:09:45.549 "product_name": "Malloc disk", 00:09:45.549 "block_size": 512, 00:09:45.549 "num_blocks": 65536, 00:09:45.549 "uuid": "8817b4f4-600c-492d-aaa0-3f68d2a735d7", 00:09:45.549 "assigned_rate_limits": { 00:09:45.549 "rw_ios_per_sec": 0, 00:09:45.549 "rw_mbytes_per_sec": 0, 00:09:45.549 "r_mbytes_per_sec": 0, 00:09:45.549 "w_mbytes_per_sec": 0 00:09:45.549 }, 00:09:45.549 "claimed": true, 00:09:45.549 "claim_type": "exclusive_write", 00:09:45.549 "zoned": false, 00:09:45.549 "supported_io_types": { 
00:09:45.549 "read": true, 00:09:45.549 "write": true, 00:09:45.549 "unmap": true, 00:09:45.549 "flush": true, 00:09:45.549 "reset": true, 00:09:45.549 "nvme_admin": false, 00:09:45.549 "nvme_io": false, 00:09:45.549 "nvme_io_md": false, 00:09:45.549 "write_zeroes": true, 00:09:45.549 "zcopy": true, 00:09:45.549 "get_zone_info": false, 00:09:45.549 "zone_management": false, 00:09:45.549 "zone_append": false, 00:09:45.549 "compare": false, 00:09:45.549 "compare_and_write": false, 00:09:45.549 "abort": true, 00:09:45.549 "seek_hole": false, 00:09:45.549 "seek_data": false, 00:09:45.549 "copy": true, 00:09:45.549 "nvme_iov_md": false 00:09:45.549 }, 00:09:45.549 "memory_domains": [ 00:09:45.549 { 00:09:45.549 "dma_device_id": "system", 00:09:45.549 "dma_device_type": 1 00:09:45.549 }, 00:09:45.549 { 00:09:45.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.549 "dma_device_type": 2 00:09:45.549 } 00:09:45.549 ], 00:09:45.549 "driver_specific": {} 00:09:45.549 } 00:09:45.549 ] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.549 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.550 "name": "Existed_Raid", 00:09:45.550 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:45.550 "strip_size_kb": 64, 00:09:45.550 "state": "configuring", 00:09:45.550 "raid_level": "concat", 00:09:45.550 "superblock": true, 00:09:45.550 "num_base_bdevs": 4, 00:09:45.550 "num_base_bdevs_discovered": 3, 00:09:45.550 "num_base_bdevs_operational": 4, 00:09:45.550 "base_bdevs_list": [ 00:09:45.550 { 00:09:45.550 "name": "BaseBdev1", 00:09:45.550 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 2048, 00:09:45.550 "data_size": 63488 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "name": "BaseBdev2", 00:09:45.550 
"uuid": "be684cd9-912d-44d9-96c1-48958c744595", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 2048, 00:09:45.550 "data_size": 63488 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "name": "BaseBdev3", 00:09:45.550 "uuid": "8817b4f4-600c-492d-aaa0-3f68d2a735d7", 00:09:45.550 "is_configured": true, 00:09:45.550 "data_offset": 2048, 00:09:45.550 "data_size": 63488 00:09:45.550 }, 00:09:45.550 { 00:09:45.550 "name": "BaseBdev4", 00:09:45.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.550 "is_configured": false, 00:09:45.550 "data_offset": 0, 00:09:45.550 "data_size": 0 00:09:45.550 } 00:09:45.550 ] 00:09:45.550 }' 00:09:45.550 01:53:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.550 01:53:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.809 [2024-12-07 01:53:51.239044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:45.809 [2024-12-07 01:53:51.239274] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:45.809 [2024-12-07 01:53:51.239290] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:45.809 [2024-12-07 01:53:51.239559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:45.809 BaseBdev4 00:09:45.809 [2024-12-07 01:53:51.239714] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:45.809 [2024-12-07 01:53:51.239742] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:45.809 [2024-12-07 01:53:51.239855] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.809 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.809 [ 00:09:45.809 { 00:09:45.809 "name": "BaseBdev4", 00:09:45.809 "aliases": [ 00:09:45.809 "c2c4b401-e377-498b-b6c2-797ca200355c" 00:09:45.809 ], 00:09:45.809 "product_name": "Malloc disk", 00:09:45.809 "block_size": 512, 00:09:45.809 
"num_blocks": 65536, 00:09:45.809 "uuid": "c2c4b401-e377-498b-b6c2-797ca200355c", 00:09:45.809 "assigned_rate_limits": { 00:09:45.809 "rw_ios_per_sec": 0, 00:09:45.809 "rw_mbytes_per_sec": 0, 00:09:45.809 "r_mbytes_per_sec": 0, 00:09:45.809 "w_mbytes_per_sec": 0 00:09:45.809 }, 00:09:45.809 "claimed": true, 00:09:45.809 "claim_type": "exclusive_write", 00:09:45.809 "zoned": false, 00:09:45.809 "supported_io_types": { 00:09:45.809 "read": true, 00:09:45.809 "write": true, 00:09:45.809 "unmap": true, 00:09:45.809 "flush": true, 00:09:45.809 "reset": true, 00:09:45.809 "nvme_admin": false, 00:09:45.809 "nvme_io": false, 00:09:46.068 "nvme_io_md": false, 00:09:46.068 "write_zeroes": true, 00:09:46.068 "zcopy": true, 00:09:46.068 "get_zone_info": false, 00:09:46.068 "zone_management": false, 00:09:46.068 "zone_append": false, 00:09:46.068 "compare": false, 00:09:46.068 "compare_and_write": false, 00:09:46.068 "abort": true, 00:09:46.068 "seek_hole": false, 00:09:46.068 "seek_data": false, 00:09:46.068 "copy": true, 00:09:46.068 "nvme_iov_md": false 00:09:46.068 }, 00:09:46.068 "memory_domains": [ 00:09:46.068 { 00:09:46.068 "dma_device_id": "system", 00:09:46.068 "dma_device_type": 1 00:09:46.068 }, 00:09:46.068 { 00:09:46.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.068 "dma_device_type": 2 00:09:46.068 } 00:09:46.068 ], 00:09:46.068 "driver_specific": {} 00:09:46.068 } 00:09:46.068 ] 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.068 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.068 "name": "Existed_Raid", 00:09:46.068 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:46.068 "strip_size_kb": 64, 00:09:46.068 "state": "online", 00:09:46.068 "raid_level": "concat", 00:09:46.068 "superblock": true, 00:09:46.068 "num_base_bdevs": 4, 
00:09:46.068 "num_base_bdevs_discovered": 4, 00:09:46.068 "num_base_bdevs_operational": 4, 00:09:46.068 "base_bdevs_list": [ 00:09:46.068 { 00:09:46.068 "name": "BaseBdev1", 00:09:46.068 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:46.068 "is_configured": true, 00:09:46.068 "data_offset": 2048, 00:09:46.068 "data_size": 63488 00:09:46.068 }, 00:09:46.068 { 00:09:46.068 "name": "BaseBdev2", 00:09:46.068 "uuid": "be684cd9-912d-44d9-96c1-48958c744595", 00:09:46.068 "is_configured": true, 00:09:46.068 "data_offset": 2048, 00:09:46.068 "data_size": 63488 00:09:46.068 }, 00:09:46.068 { 00:09:46.068 "name": "BaseBdev3", 00:09:46.068 "uuid": "8817b4f4-600c-492d-aaa0-3f68d2a735d7", 00:09:46.068 "is_configured": true, 00:09:46.068 "data_offset": 2048, 00:09:46.068 "data_size": 63488 00:09:46.068 }, 00:09:46.068 { 00:09:46.068 "name": "BaseBdev4", 00:09:46.068 "uuid": "c2c4b401-e377-498b-b6c2-797ca200355c", 00:09:46.068 "is_configured": true, 00:09:46.068 "data_offset": 2048, 00:09:46.068 "data_size": 63488 00:09:46.068 } 00:09:46.068 ] 00:09:46.068 }' 00:09:46.069 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.069 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.328 
01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.328 [2024-12-07 01:53:51.726579] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.328 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.328 "name": "Existed_Raid", 00:09:46.328 "aliases": [ 00:09:46.328 "715e0767-807e-4936-81b5-0ffc198b7c79" 00:09:46.328 ], 00:09:46.328 "product_name": "Raid Volume", 00:09:46.328 "block_size": 512, 00:09:46.328 "num_blocks": 253952, 00:09:46.328 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:46.328 "assigned_rate_limits": { 00:09:46.328 "rw_ios_per_sec": 0, 00:09:46.328 "rw_mbytes_per_sec": 0, 00:09:46.328 "r_mbytes_per_sec": 0, 00:09:46.328 "w_mbytes_per_sec": 0 00:09:46.328 }, 00:09:46.328 "claimed": false, 00:09:46.328 "zoned": false, 00:09:46.328 "supported_io_types": { 00:09:46.328 "read": true, 00:09:46.328 "write": true, 00:09:46.328 "unmap": true, 00:09:46.328 "flush": true, 00:09:46.328 "reset": true, 00:09:46.328 "nvme_admin": false, 00:09:46.328 "nvme_io": false, 00:09:46.328 "nvme_io_md": false, 00:09:46.328 "write_zeroes": true, 00:09:46.328 "zcopy": false, 00:09:46.328 "get_zone_info": false, 00:09:46.328 "zone_management": false, 00:09:46.328 "zone_append": false, 00:09:46.328 "compare": false, 00:09:46.328 "compare_and_write": false, 00:09:46.328 "abort": false, 00:09:46.328 "seek_hole": false, 00:09:46.328 "seek_data": false, 00:09:46.328 "copy": false, 00:09:46.328 
"nvme_iov_md": false 00:09:46.328 }, 00:09:46.328 "memory_domains": [ 00:09:46.328 { 00:09:46.328 "dma_device_id": "system", 00:09:46.328 "dma_device_type": 1 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.328 "dma_device_type": 2 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "system", 00:09:46.328 "dma_device_type": 1 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.328 "dma_device_type": 2 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "system", 00:09:46.328 "dma_device_type": 1 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.328 "dma_device_type": 2 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "system", 00:09:46.328 "dma_device_type": 1 00:09:46.328 }, 00:09:46.328 { 00:09:46.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.328 "dma_device_type": 2 00:09:46.328 } 00:09:46.328 ], 00:09:46.328 "driver_specific": { 00:09:46.328 "raid": { 00:09:46.329 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:46.329 "strip_size_kb": 64, 00:09:46.329 "state": "online", 00:09:46.329 "raid_level": "concat", 00:09:46.329 "superblock": true, 00:09:46.329 "num_base_bdevs": 4, 00:09:46.329 "num_base_bdevs_discovered": 4, 00:09:46.329 "num_base_bdevs_operational": 4, 00:09:46.329 "base_bdevs_list": [ 00:09:46.329 { 00:09:46.329 "name": "BaseBdev1", 00:09:46.329 "uuid": "11701be1-5ea3-4f04-a1e3-480da649ca2a", 00:09:46.329 "is_configured": true, 00:09:46.329 "data_offset": 2048, 00:09:46.329 "data_size": 63488 00:09:46.329 }, 00:09:46.329 { 00:09:46.329 "name": "BaseBdev2", 00:09:46.329 "uuid": "be684cd9-912d-44d9-96c1-48958c744595", 00:09:46.329 "is_configured": true, 00:09:46.329 "data_offset": 2048, 00:09:46.329 "data_size": 63488 00:09:46.329 }, 00:09:46.329 { 00:09:46.329 "name": "BaseBdev3", 00:09:46.329 "uuid": "8817b4f4-600c-492d-aaa0-3f68d2a735d7", 00:09:46.329 "is_configured": true, 
00:09:46.329 "data_offset": 2048, 00:09:46.329 "data_size": 63488 00:09:46.329 }, 00:09:46.329 { 00:09:46.329 "name": "BaseBdev4", 00:09:46.329 "uuid": "c2c4b401-e377-498b-b6c2-797ca200355c", 00:09:46.329 "is_configured": true, 00:09:46.329 "data_offset": 2048, 00:09:46.329 "data_size": 63488 00:09:46.329 } 00:09:46.329 ] 00:09:46.329 } 00:09:46.329 } 00:09:46.329 }' 00:09:46.329 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.588 BaseBdev2 00:09:46.588 BaseBdev3 00:09:46.588 BaseBdev4' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.588 01:53:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.588 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.589 01:53:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.589 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.848 [2024-12-07 01:53:52.053754] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.848 [2024-12-07 01:53:52.053784] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.848 [2024-12-07 01:53:52.053843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.848 "name": "Existed_Raid", 00:09:46.848 "uuid": "715e0767-807e-4936-81b5-0ffc198b7c79", 00:09:46.848 "strip_size_kb": 64, 00:09:46.848 "state": "offline", 00:09:46.848 "raid_level": "concat", 00:09:46.848 "superblock": true, 00:09:46.848 "num_base_bdevs": 4, 00:09:46.848 "num_base_bdevs_discovered": 3, 00:09:46.848 "num_base_bdevs_operational": 3, 00:09:46.848 "base_bdevs_list": [ 00:09:46.848 { 00:09:46.848 "name": null, 00:09:46.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.848 "is_configured": false, 00:09:46.848 "data_offset": 0, 00:09:46.848 "data_size": 63488 00:09:46.848 }, 00:09:46.848 { 00:09:46.848 "name": "BaseBdev2", 00:09:46.848 "uuid": "be684cd9-912d-44d9-96c1-48958c744595", 00:09:46.848 "is_configured": true, 00:09:46.848 "data_offset": 2048, 00:09:46.848 "data_size": 63488 00:09:46.848 }, 00:09:46.848 { 00:09:46.848 "name": "BaseBdev3", 00:09:46.848 "uuid": "8817b4f4-600c-492d-aaa0-3f68d2a735d7", 00:09:46.848 "is_configured": true, 00:09:46.848 "data_offset": 2048, 00:09:46.848 "data_size": 63488 00:09:46.848 }, 00:09:46.848 { 00:09:46.848 "name": "BaseBdev4", 00:09:46.848 "uuid": "c2c4b401-e377-498b-b6c2-797ca200355c", 00:09:46.848 "is_configured": true, 00:09:46.848 "data_offset": 2048, 00:09:46.848 "data_size": 63488 00:09:46.848 } 00:09:46.848 ] 00:09:46.848 }' 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.848 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.107 
01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.107 [2024-12-07 01:53:52.540267] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.107 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 [2024-12-07 01:53:52.603522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:47.366 01:53:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 [2024-12-07 01:53:52.670977] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:47.366 [2024-12-07 01:53:52.671085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 BaseBdev2 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.366 [ 00:09:47.366 { 00:09:47.366 "name": "BaseBdev2", 00:09:47.366 "aliases": [ 00:09:47.366 
"35f40e29-8de4-4680-a1f6-f7180a5e82f5" 00:09:47.366 ], 00:09:47.366 "product_name": "Malloc disk", 00:09:47.366 "block_size": 512, 00:09:47.366 "num_blocks": 65536, 00:09:47.366 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:47.366 "assigned_rate_limits": { 00:09:47.366 "rw_ios_per_sec": 0, 00:09:47.366 "rw_mbytes_per_sec": 0, 00:09:47.366 "r_mbytes_per_sec": 0, 00:09:47.366 "w_mbytes_per_sec": 0 00:09:47.366 }, 00:09:47.366 "claimed": false, 00:09:47.366 "zoned": false, 00:09:47.366 "supported_io_types": { 00:09:47.366 "read": true, 00:09:47.366 "write": true, 00:09:47.366 "unmap": true, 00:09:47.366 "flush": true, 00:09:47.366 "reset": true, 00:09:47.366 "nvme_admin": false, 00:09:47.366 "nvme_io": false, 00:09:47.366 "nvme_io_md": false, 00:09:47.366 "write_zeroes": true, 00:09:47.366 "zcopy": true, 00:09:47.366 "get_zone_info": false, 00:09:47.366 "zone_management": false, 00:09:47.366 "zone_append": false, 00:09:47.366 "compare": false, 00:09:47.366 "compare_and_write": false, 00:09:47.366 "abort": true, 00:09:47.366 "seek_hole": false, 00:09:47.366 "seek_data": false, 00:09:47.366 "copy": true, 00:09:47.366 "nvme_iov_md": false 00:09:47.366 }, 00:09:47.366 "memory_domains": [ 00:09:47.366 { 00:09:47.366 "dma_device_id": "system", 00:09:47.366 "dma_device_type": 1 00:09:47.366 }, 00:09:47.366 { 00:09:47.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.366 "dma_device_type": 2 00:09:47.366 } 00:09:47.366 ], 00:09:47.366 "driver_specific": {} 00:09:47.366 } 00:09:47.366 ] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.366 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.367 01:53:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.367 BaseBdev3 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.367 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.626 [ 00:09:47.626 { 
00:09:47.626 "name": "BaseBdev3", 00:09:47.626 "aliases": [ 00:09:47.626 "33584fc0-73a0-4e09-b9c6-0ffe34ed6152" 00:09:47.626 ], 00:09:47.626 "product_name": "Malloc disk", 00:09:47.626 "block_size": 512, 00:09:47.626 "num_blocks": 65536, 00:09:47.626 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:47.626 "assigned_rate_limits": { 00:09:47.626 "rw_ios_per_sec": 0, 00:09:47.626 "rw_mbytes_per_sec": 0, 00:09:47.626 "r_mbytes_per_sec": 0, 00:09:47.626 "w_mbytes_per_sec": 0 00:09:47.626 }, 00:09:47.626 "claimed": false, 00:09:47.626 "zoned": false, 00:09:47.626 "supported_io_types": { 00:09:47.626 "read": true, 00:09:47.626 "write": true, 00:09:47.626 "unmap": true, 00:09:47.626 "flush": true, 00:09:47.626 "reset": true, 00:09:47.626 "nvme_admin": false, 00:09:47.626 "nvme_io": false, 00:09:47.626 "nvme_io_md": false, 00:09:47.626 "write_zeroes": true, 00:09:47.626 "zcopy": true, 00:09:47.626 "get_zone_info": false, 00:09:47.626 "zone_management": false, 00:09:47.626 "zone_append": false, 00:09:47.626 "compare": false, 00:09:47.626 "compare_and_write": false, 00:09:47.626 "abort": true, 00:09:47.626 "seek_hole": false, 00:09:47.626 "seek_data": false, 00:09:47.626 "copy": true, 00:09:47.626 "nvme_iov_md": false 00:09:47.626 }, 00:09:47.626 "memory_domains": [ 00:09:47.626 { 00:09:47.626 "dma_device_id": "system", 00:09:47.626 "dma_device_type": 1 00:09:47.626 }, 00:09:47.626 { 00:09:47.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.626 "dma_device_type": 2 00:09:47.626 } 00:09:47.626 ], 00:09:47.626 "driver_specific": {} 00:09:47.626 } 00:09:47.626 ] 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.626 BaseBdev4 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.626 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:47.626 [ 00:09:47.626 { 00:09:47.626 "name": "BaseBdev4", 00:09:47.626 "aliases": [ 00:09:47.626 "712ea5dd-8de3-49e1-bfe9-51dd918e876f" 00:09:47.626 ], 00:09:47.626 "product_name": "Malloc disk", 00:09:47.626 "block_size": 512, 00:09:47.626 "num_blocks": 65536, 00:09:47.626 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:47.626 "assigned_rate_limits": { 00:09:47.626 "rw_ios_per_sec": 0, 00:09:47.626 "rw_mbytes_per_sec": 0, 00:09:47.626 "r_mbytes_per_sec": 0, 00:09:47.626 "w_mbytes_per_sec": 0 00:09:47.626 }, 00:09:47.626 "claimed": false, 00:09:47.626 "zoned": false, 00:09:47.626 "supported_io_types": { 00:09:47.626 "read": true, 00:09:47.626 "write": true, 00:09:47.626 "unmap": true, 00:09:47.626 "flush": true, 00:09:47.626 "reset": true, 00:09:47.626 "nvme_admin": false, 00:09:47.626 "nvme_io": false, 00:09:47.626 "nvme_io_md": false, 00:09:47.626 "write_zeroes": true, 00:09:47.626 "zcopy": true, 00:09:47.626 "get_zone_info": false, 00:09:47.626 "zone_management": false, 00:09:47.626 "zone_append": false, 00:09:47.626 "compare": false, 00:09:47.626 "compare_and_write": false, 00:09:47.626 "abort": true, 00:09:47.626 "seek_hole": false, 00:09:47.626 "seek_data": false, 00:09:47.626 "copy": true, 00:09:47.626 "nvme_iov_md": false 00:09:47.626 }, 00:09:47.627 "memory_domains": [ 00:09:47.627 { 00:09:47.627 "dma_device_id": "system", 00:09:47.627 "dma_device_type": 1 00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.627 "dma_device_type": 2 00:09:47.627 } 00:09:47.627 ], 00:09:47.627 "driver_specific": {} 00:09:47.627 } 00:09:47.627 ] 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.627 01:53:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.627 [2024-12-07 01:53:52.903946] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.627 [2024-12-07 01:53:52.904045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.627 [2024-12-07 01:53:52.904088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.627 [2024-12-07 01:53:52.905902] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.627 [2024-12-07 01:53:52.905998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.627 "name": "Existed_Raid", 00:09:47.627 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:47.627 "strip_size_kb": 64, 00:09:47.627 "state": "configuring", 00:09:47.627 "raid_level": "concat", 00:09:47.627 "superblock": true, 00:09:47.627 "num_base_bdevs": 4, 00:09:47.627 "num_base_bdevs_discovered": 3, 00:09:47.627 "num_base_bdevs_operational": 4, 00:09:47.627 "base_bdevs_list": [ 00:09:47.627 { 00:09:47.627 "name": "BaseBdev1", 00:09:47.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.627 "is_configured": false, 00:09:47.627 "data_offset": 0, 00:09:47.627 "data_size": 0 00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "name": "BaseBdev2", 00:09:47.627 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:47.627 "is_configured": true, 00:09:47.627 "data_offset": 2048, 00:09:47.627 "data_size": 63488 
00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "name": "BaseBdev3", 00:09:47.627 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:47.627 "is_configured": true, 00:09:47.627 "data_offset": 2048, 00:09:47.627 "data_size": 63488 00:09:47.627 }, 00:09:47.627 { 00:09:47.627 "name": "BaseBdev4", 00:09:47.627 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:47.627 "is_configured": true, 00:09:47.627 "data_offset": 2048, 00:09:47.627 "data_size": 63488 00:09:47.627 } 00:09:47.627 ] 00:09:47.627 }' 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.627 01:53:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.192 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.192 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.192 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.192 [2024-12-07 01:53:53.383142] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.193 "name": "Existed_Raid", 00:09:48.193 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:48.193 "strip_size_kb": 64, 00:09:48.193 "state": "configuring", 00:09:48.193 "raid_level": "concat", 00:09:48.193 "superblock": true, 00:09:48.193 "num_base_bdevs": 4, 00:09:48.193 "num_base_bdevs_discovered": 2, 00:09:48.193 "num_base_bdevs_operational": 4, 00:09:48.193 "base_bdevs_list": [ 00:09:48.193 { 00:09:48.193 "name": "BaseBdev1", 00:09:48.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.193 "is_configured": false, 00:09:48.193 "data_offset": 0, 00:09:48.193 "data_size": 0 00:09:48.193 }, 00:09:48.193 { 00:09:48.193 "name": null, 00:09:48.193 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:48.193 "is_configured": false, 00:09:48.193 "data_offset": 0, 00:09:48.193 "data_size": 63488 
00:09:48.193 }, 00:09:48.193 { 00:09:48.193 "name": "BaseBdev3", 00:09:48.193 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:48.193 "is_configured": true, 00:09:48.193 "data_offset": 2048, 00:09:48.193 "data_size": 63488 00:09:48.193 }, 00:09:48.193 { 00:09:48.193 "name": "BaseBdev4", 00:09:48.193 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:48.193 "is_configured": true, 00:09:48.193 "data_offset": 2048, 00:09:48.193 "data_size": 63488 00:09:48.193 } 00:09:48.193 ] 00:09:48.193 }' 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.193 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 [2024-12-07 01:53:53.825126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.454 BaseBdev1 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 [ 00:09:48.454 { 00:09:48.454 "name": "BaseBdev1", 00:09:48.454 "aliases": [ 00:09:48.454 "588034ca-f52c-4c2c-aff8-deff068a9c17" 00:09:48.454 ], 00:09:48.454 "product_name": "Malloc disk", 00:09:48.454 "block_size": 512, 00:09:48.454 "num_blocks": 65536, 00:09:48.454 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:48.454 "assigned_rate_limits": { 00:09:48.454 "rw_ios_per_sec": 0, 00:09:48.454 "rw_mbytes_per_sec": 0, 
00:09:48.454 "r_mbytes_per_sec": 0, 00:09:48.454 "w_mbytes_per_sec": 0 00:09:48.454 }, 00:09:48.454 "claimed": true, 00:09:48.454 "claim_type": "exclusive_write", 00:09:48.454 "zoned": false, 00:09:48.454 "supported_io_types": { 00:09:48.454 "read": true, 00:09:48.454 "write": true, 00:09:48.454 "unmap": true, 00:09:48.454 "flush": true, 00:09:48.454 "reset": true, 00:09:48.454 "nvme_admin": false, 00:09:48.454 "nvme_io": false, 00:09:48.454 "nvme_io_md": false, 00:09:48.454 "write_zeroes": true, 00:09:48.454 "zcopy": true, 00:09:48.454 "get_zone_info": false, 00:09:48.454 "zone_management": false, 00:09:48.454 "zone_append": false, 00:09:48.454 "compare": false, 00:09:48.454 "compare_and_write": false, 00:09:48.454 "abort": true, 00:09:48.454 "seek_hole": false, 00:09:48.454 "seek_data": false, 00:09:48.454 "copy": true, 00:09:48.454 "nvme_iov_md": false 00:09:48.454 }, 00:09:48.454 "memory_domains": [ 00:09:48.454 { 00:09:48.454 "dma_device_id": "system", 00:09:48.454 "dma_device_type": 1 00:09:48.454 }, 00:09:48.454 { 00:09:48.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.454 "dma_device_type": 2 00:09:48.454 } 00:09:48.454 ], 00:09:48.454 "driver_specific": {} 00:09:48.454 } 00:09:48.454 ] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.454 01:53:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.454 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.713 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.713 "name": "Existed_Raid", 00:09:48.714 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:48.714 "strip_size_kb": 64, 00:09:48.714 "state": "configuring", 00:09:48.714 "raid_level": "concat", 00:09:48.714 "superblock": true, 00:09:48.714 "num_base_bdevs": 4, 00:09:48.714 "num_base_bdevs_discovered": 3, 00:09:48.714 "num_base_bdevs_operational": 4, 00:09:48.714 "base_bdevs_list": [ 00:09:48.714 { 00:09:48.714 "name": "BaseBdev1", 00:09:48.714 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:48.714 "is_configured": true, 00:09:48.714 "data_offset": 2048, 00:09:48.714 "data_size": 63488 00:09:48.714 }, 00:09:48.714 { 
00:09:48.714 "name": null, 00:09:48.714 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:48.714 "is_configured": false, 00:09:48.714 "data_offset": 0, 00:09:48.714 "data_size": 63488 00:09:48.714 }, 00:09:48.714 { 00:09:48.714 "name": "BaseBdev3", 00:09:48.714 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:48.714 "is_configured": true, 00:09:48.714 "data_offset": 2048, 00:09:48.714 "data_size": 63488 00:09:48.714 }, 00:09:48.714 { 00:09:48.714 "name": "BaseBdev4", 00:09:48.714 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:48.714 "is_configured": true, 00:09:48.714 "data_offset": 2048, 00:09:48.714 "data_size": 63488 00:09:48.714 } 00:09:48.714 ] 00:09:48.714 }' 00:09:48.714 01:53:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.714 01:53:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 [2024-12-07 01:53:54.296386] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.973 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.973 01:53:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.973 "name": "Existed_Raid", 00:09:48.973 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:48.973 "strip_size_kb": 64, 00:09:48.973 "state": "configuring", 00:09:48.973 "raid_level": "concat", 00:09:48.973 "superblock": true, 00:09:48.973 "num_base_bdevs": 4, 00:09:48.973 "num_base_bdevs_discovered": 2, 00:09:48.973 "num_base_bdevs_operational": 4, 00:09:48.973 "base_bdevs_list": [ 00:09:48.973 { 00:09:48.973 "name": "BaseBdev1", 00:09:48.973 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:48.973 "is_configured": true, 00:09:48.973 "data_offset": 2048, 00:09:48.973 "data_size": 63488 00:09:48.973 }, 00:09:48.973 { 00:09:48.973 "name": null, 00:09:48.973 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:48.973 "is_configured": false, 00:09:48.973 "data_offset": 0, 00:09:48.973 "data_size": 63488 00:09:48.973 }, 00:09:48.973 { 00:09:48.973 "name": null, 00:09:48.974 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:48.974 "is_configured": false, 00:09:48.974 "data_offset": 0, 00:09:48.974 "data_size": 63488 00:09:48.974 }, 00:09:48.974 { 00:09:48.974 "name": "BaseBdev4", 00:09:48.974 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:48.974 "is_configured": true, 00:09:48.974 "data_offset": 2048, 00:09:48.974 "data_size": 63488 00:09:48.974 } 00:09:48.974 ] 00:09:48.974 }' 00:09:48.974 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.974 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.555 
01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.555 [2024-12-07 01:53:54.775647] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.555 "name": "Existed_Raid", 00:09:49.555 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:49.555 "strip_size_kb": 64, 00:09:49.555 "state": "configuring", 00:09:49.555 "raid_level": "concat", 00:09:49.555 "superblock": true, 00:09:49.555 "num_base_bdevs": 4, 00:09:49.555 "num_base_bdevs_discovered": 3, 00:09:49.555 "num_base_bdevs_operational": 4, 00:09:49.555 "base_bdevs_list": [ 00:09:49.555 { 00:09:49.555 "name": "BaseBdev1", 00:09:49.555 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:49.555 "is_configured": true, 00:09:49.555 "data_offset": 2048, 00:09:49.555 "data_size": 63488 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "name": null, 00:09:49.555 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:49.555 "is_configured": false, 00:09:49.555 "data_offset": 0, 00:09:49.555 "data_size": 63488 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "name": "BaseBdev3", 00:09:49.555 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:49.555 "is_configured": true, 00:09:49.555 "data_offset": 2048, 00:09:49.555 "data_size": 63488 00:09:49.555 }, 00:09:49.555 { 00:09:49.555 "name": "BaseBdev4", 00:09:49.555 "uuid": 
"712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:49.555 "is_configured": true, 00:09:49.555 "data_offset": 2048, 00:09:49.555 "data_size": 63488 00:09:49.555 } 00:09:49.555 ] 00:09:49.555 }' 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.555 01:53:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.817 [2024-12-07 01:53:55.246819] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.817 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.076 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.076 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.076 "name": "Existed_Raid", 00:09:50.076 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:50.076 "strip_size_kb": 64, 00:09:50.076 "state": "configuring", 00:09:50.076 "raid_level": "concat", 00:09:50.076 "superblock": true, 00:09:50.076 "num_base_bdevs": 4, 00:09:50.076 "num_base_bdevs_discovered": 2, 00:09:50.076 "num_base_bdevs_operational": 4, 00:09:50.076 "base_bdevs_list": [ 00:09:50.076 { 00:09:50.076 "name": null, 00:09:50.076 
"uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:50.076 "is_configured": false, 00:09:50.076 "data_offset": 0, 00:09:50.076 "data_size": 63488 00:09:50.076 }, 00:09:50.076 { 00:09:50.076 "name": null, 00:09:50.076 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:50.076 "is_configured": false, 00:09:50.076 "data_offset": 0, 00:09:50.076 "data_size": 63488 00:09:50.076 }, 00:09:50.076 { 00:09:50.076 "name": "BaseBdev3", 00:09:50.076 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:50.076 "is_configured": true, 00:09:50.076 "data_offset": 2048, 00:09:50.076 "data_size": 63488 00:09:50.076 }, 00:09:50.076 { 00:09:50.076 "name": "BaseBdev4", 00:09:50.076 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:50.076 "is_configured": true, 00:09:50.076 "data_offset": 2048, 00:09:50.076 "data_size": 63488 00:09:50.076 } 00:09:50.076 ] 00:09:50.076 }' 00:09:50.076 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.076 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.335 [2024-12-07 01:53:55.668724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.335 "name": "Existed_Raid", 00:09:50.335 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:50.335 "strip_size_kb": 64, 00:09:50.335 "state": "configuring", 00:09:50.335 "raid_level": "concat", 00:09:50.335 "superblock": true, 00:09:50.335 "num_base_bdevs": 4, 00:09:50.335 "num_base_bdevs_discovered": 3, 00:09:50.335 "num_base_bdevs_operational": 4, 00:09:50.335 "base_bdevs_list": [ 00:09:50.335 { 00:09:50.335 "name": null, 00:09:50.335 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:50.335 "is_configured": false, 00:09:50.335 "data_offset": 0, 00:09:50.335 "data_size": 63488 00:09:50.335 }, 00:09:50.335 { 00:09:50.335 "name": "BaseBdev2", 00:09:50.335 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:50.335 "is_configured": true, 00:09:50.335 "data_offset": 2048, 00:09:50.335 "data_size": 63488 00:09:50.335 }, 00:09:50.335 { 00:09:50.335 "name": "BaseBdev3", 00:09:50.335 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:50.335 "is_configured": true, 00:09:50.335 "data_offset": 2048, 00:09:50.335 "data_size": 63488 00:09:50.335 }, 00:09:50.335 { 00:09:50.335 "name": "BaseBdev4", 00:09:50.335 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:50.335 "is_configured": true, 00:09:50.335 "data_offset": 2048, 00:09:50.335 "data_size": 63488 00:09:50.335 } 00:09:50.335 ] 00:09:50.335 }' 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.335 01:53:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.902 01:53:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 588034ca-f52c-4c2c-aff8-deff068a9c17 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 [2024-12-07 01:53:56.190540] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:50.902 [2024-12-07 01:53:56.190816] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:50.902 [2024-12-07 01:53:56.190852] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.902 [2024-12-07 01:53:56.191154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:50.902 NewBaseBdev 00:09:50.902 [2024-12-07 01:53:56.191313] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:50.902 [2024-12-07 01:53:56.191359] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:50.902 [2024-12-07 01:53:56.191492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:50.902 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.902 01:53:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.902 [ 00:09:50.902 { 00:09:50.902 "name": "NewBaseBdev", 00:09:50.902 "aliases": [ 00:09:50.902 "588034ca-f52c-4c2c-aff8-deff068a9c17" 00:09:50.902 ], 00:09:50.902 "product_name": "Malloc disk", 00:09:50.902 "block_size": 512, 00:09:50.902 "num_blocks": 65536, 00:09:50.902 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:50.902 "assigned_rate_limits": { 00:09:50.902 "rw_ios_per_sec": 0, 00:09:50.902 "rw_mbytes_per_sec": 0, 00:09:50.902 "r_mbytes_per_sec": 0, 00:09:50.902 "w_mbytes_per_sec": 0 00:09:50.902 }, 00:09:50.902 "claimed": true, 00:09:50.902 "claim_type": "exclusive_write", 00:09:50.902 "zoned": false, 00:09:50.902 "supported_io_types": { 00:09:50.902 "read": true, 00:09:50.902 "write": true, 00:09:50.902 "unmap": true, 00:09:50.902 "flush": true, 00:09:50.902 "reset": true, 00:09:50.902 "nvme_admin": false, 00:09:50.902 "nvme_io": false, 00:09:50.902 "nvme_io_md": false, 00:09:50.902 "write_zeroes": true, 00:09:50.902 "zcopy": true, 00:09:50.902 "get_zone_info": false, 00:09:50.902 "zone_management": false, 00:09:50.902 "zone_append": false, 00:09:50.902 "compare": false, 00:09:50.902 "compare_and_write": false, 00:09:50.902 "abort": true, 00:09:50.902 "seek_hole": false, 00:09:50.902 "seek_data": false, 00:09:50.902 "copy": true, 00:09:50.902 "nvme_iov_md": false 00:09:50.902 }, 00:09:50.902 "memory_domains": [ 00:09:50.902 { 00:09:50.903 "dma_device_id": "system", 00:09:50.903 "dma_device_type": 1 00:09:50.903 }, 00:09:50.903 { 00:09:50.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.903 "dma_device_type": 2 00:09:50.903 } 00:09:50.903 ], 00:09:50.903 "driver_specific": {} 00:09:50.903 } 00:09:50.903 ] 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:50.903 01:53:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.903 "name": "Existed_Raid", 00:09:50.903 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:50.903 "strip_size_kb": 64, 00:09:50.903 
"state": "online", 00:09:50.903 "raid_level": "concat", 00:09:50.903 "superblock": true, 00:09:50.903 "num_base_bdevs": 4, 00:09:50.903 "num_base_bdevs_discovered": 4, 00:09:50.903 "num_base_bdevs_operational": 4, 00:09:50.903 "base_bdevs_list": [ 00:09:50.903 { 00:09:50.903 "name": "NewBaseBdev", 00:09:50.903 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:50.903 "is_configured": true, 00:09:50.903 "data_offset": 2048, 00:09:50.903 "data_size": 63488 00:09:50.903 }, 00:09:50.903 { 00:09:50.903 "name": "BaseBdev2", 00:09:50.903 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:50.903 "is_configured": true, 00:09:50.903 "data_offset": 2048, 00:09:50.903 "data_size": 63488 00:09:50.903 }, 00:09:50.903 { 00:09:50.903 "name": "BaseBdev3", 00:09:50.903 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:50.903 "is_configured": true, 00:09:50.903 "data_offset": 2048, 00:09:50.903 "data_size": 63488 00:09:50.903 }, 00:09:50.903 { 00:09:50.903 "name": "BaseBdev4", 00:09:50.903 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:50.903 "is_configured": true, 00:09:50.903 "data_offset": 2048, 00:09:50.903 "data_size": 63488 00:09:50.903 } 00:09:50.903 ] 00:09:50.903 }' 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.903 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.472 
01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.472 [2024-12-07 01:53:56.666075] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.472 "name": "Existed_Raid", 00:09:51.472 "aliases": [ 00:09:51.472 "d44ada1a-08e6-417f-85a9-9da3c4eb9d19" 00:09:51.472 ], 00:09:51.472 "product_name": "Raid Volume", 00:09:51.472 "block_size": 512, 00:09:51.472 "num_blocks": 253952, 00:09:51.472 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:51.472 "assigned_rate_limits": { 00:09:51.472 "rw_ios_per_sec": 0, 00:09:51.472 "rw_mbytes_per_sec": 0, 00:09:51.472 "r_mbytes_per_sec": 0, 00:09:51.472 "w_mbytes_per_sec": 0 00:09:51.472 }, 00:09:51.472 "claimed": false, 00:09:51.472 "zoned": false, 00:09:51.472 "supported_io_types": { 00:09:51.472 "read": true, 00:09:51.472 "write": true, 00:09:51.472 "unmap": true, 00:09:51.472 "flush": true, 00:09:51.472 "reset": true, 00:09:51.472 "nvme_admin": false, 00:09:51.472 "nvme_io": false, 00:09:51.472 "nvme_io_md": false, 00:09:51.472 "write_zeroes": true, 00:09:51.472 "zcopy": false, 00:09:51.472 "get_zone_info": false, 00:09:51.472 "zone_management": false, 00:09:51.472 "zone_append": false, 00:09:51.472 "compare": false, 00:09:51.472 "compare_and_write": false, 00:09:51.472 "abort": 
false, 00:09:51.472 "seek_hole": false, 00:09:51.472 "seek_data": false, 00:09:51.472 "copy": false, 00:09:51.472 "nvme_iov_md": false 00:09:51.472 }, 00:09:51.472 "memory_domains": [ 00:09:51.472 { 00:09:51.472 "dma_device_id": "system", 00:09:51.472 "dma_device_type": 1 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.472 "dma_device_type": 2 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "system", 00:09:51.472 "dma_device_type": 1 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.472 "dma_device_type": 2 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "system", 00:09:51.472 "dma_device_type": 1 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.472 "dma_device_type": 2 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "system", 00:09:51.472 "dma_device_type": 1 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.472 "dma_device_type": 2 00:09:51.472 } 00:09:51.472 ], 00:09:51.472 "driver_specific": { 00:09:51.472 "raid": { 00:09:51.472 "uuid": "d44ada1a-08e6-417f-85a9-9da3c4eb9d19", 00:09:51.472 "strip_size_kb": 64, 00:09:51.472 "state": "online", 00:09:51.472 "raid_level": "concat", 00:09:51.472 "superblock": true, 00:09:51.472 "num_base_bdevs": 4, 00:09:51.472 "num_base_bdevs_discovered": 4, 00:09:51.472 "num_base_bdevs_operational": 4, 00:09:51.472 "base_bdevs_list": [ 00:09:51.472 { 00:09:51.472 "name": "NewBaseBdev", 00:09:51.472 "uuid": "588034ca-f52c-4c2c-aff8-deff068a9c17", 00:09:51.472 "is_configured": true, 00:09:51.472 "data_offset": 2048, 00:09:51.472 "data_size": 63488 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "name": "BaseBdev2", 00:09:51.472 "uuid": "35f40e29-8de4-4680-a1f6-f7180a5e82f5", 00:09:51.472 "is_configured": true, 00:09:51.472 "data_offset": 2048, 00:09:51.472 "data_size": 63488 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 
"name": "BaseBdev3", 00:09:51.472 "uuid": "33584fc0-73a0-4e09-b9c6-0ffe34ed6152", 00:09:51.472 "is_configured": true, 00:09:51.472 "data_offset": 2048, 00:09:51.472 "data_size": 63488 00:09:51.472 }, 00:09:51.472 { 00:09:51.472 "name": "BaseBdev4", 00:09:51.472 "uuid": "712ea5dd-8de3-49e1-bfe9-51dd918e876f", 00:09:51.472 "is_configured": true, 00:09:51.472 "data_offset": 2048, 00:09:51.472 "data_size": 63488 00:09:51.472 } 00:09:51.472 ] 00:09:51.472 } 00:09:51.472 } 00:09:51.472 }' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:51.472 BaseBdev2 00:09:51.472 BaseBdev3 00:09:51.472 BaseBdev4' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.472 01:53:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.472 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 [2024-12-07 01:53:56.973223] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:51.732 [2024-12-07 01:53:56.973289] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.732 [2024-12-07 01:53:56.973381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.732 [2024-12-07 01:53:56.973479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.732 [2024-12-07 01:53:56.973524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82539 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82539 ']' 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 82539 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:51.732 01:53:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82539 00:09:51.732 killing process with pid 82539 00:09:51.732 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:51.732 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:51.732 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82539' 00:09:51.732 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 82539 00:09:51.732 [2024-12-07 01:53:57.008029] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.732 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 82539 00:09:51.732 [2024-12-07 01:53:57.048613] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.992 01:53:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:51.992 00:09:51.992 real 0m9.282s 00:09:51.992 user 0m15.894s 00:09:51.992 sys 0m1.905s 00:09:51.992 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:51.992 
************************************ 00:09:51.992 END TEST raid_state_function_test_sb 00:09:51.992 ************************************ 00:09:51.992 01:53:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.992 01:53:57 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:51.992 01:53:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:51.992 01:53:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:51.992 01:53:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.992 ************************************ 00:09:51.992 START TEST raid_superblock_test 00:09:51.992 ************************************ 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83193 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:51.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83193 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 83193 ']' 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.992 01:53:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.992 [2024-12-07 01:53:57.442325] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:51.992 [2024-12-07 01:53:57.442535] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83193 ] 00:09:52.252 [2024-12-07 01:53:57.586896] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.252 [2024-12-07 01:53:57.632187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.252 [2024-12-07 01:53:57.673257] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.252 [2024-12-07 01:53:57.673376] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:52.821 
01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.821 malloc1 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:52.821 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 [2024-12-07 01:53:58.286921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:53.083 [2024-12-07 01:53:58.287026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.083 [2024-12-07 01:53:58.287058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:53.083 [2024-12-07 01:53:58.287100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.083 [2024-12-07 01:53:58.289324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.083 [2024-12-07 01:53:58.289361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:53.083 pt1 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 malloc2 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 [2024-12-07 01:53:58.323079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.083 [2024-12-07 01:53:58.323190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.083 [2024-12-07 01:53:58.323230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:53.083 [2024-12-07 01:53:58.323269] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.083 [2024-12-07 01:53:58.325823] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.083 [2024-12-07 01:53:58.325902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.083 
pt2 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 malloc3 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 [2024-12-07 01:53:58.355471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:53.083 [2024-12-07 01:53:58.355580] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.083 [2024-12-07 01:53:58.355615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:53.083 [2024-12-07 01:53:58.355646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.083 [2024-12-07 01:53:58.357826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.083 [2024-12-07 01:53:58.357909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:53.083 pt3 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 malloc4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 [2024-12-07 01:53:58.388087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:53.083 [2024-12-07 01:53:58.388200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.083 [2024-12-07 01:53:58.388235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:53.083 [2024-12-07 01:53:58.388267] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.083 [2024-12-07 01:53:58.390479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.083 [2024-12-07 01:53:58.390548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:53.083 pt4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.083 [2024-12-07 01:53:58.400111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:53.083 [2024-12-07 
01:53:58.401956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.083 [2024-12-07 01:53:58.402073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:53.083 [2024-12-07 01:53:58.402120] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:53.083 [2024-12-07 01:53:58.402267] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:53.083 [2024-12-07 01:53:58.402280] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:53.083 [2024-12-07 01:53:58.402530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:53.083 [2024-12-07 01:53:58.402680] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:53.083 [2024-12-07 01:53:58.402708] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:53.083 [2024-12-07 01:53:58.402830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.083 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.084 "name": "raid_bdev1", 00:09:53.084 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:53.084 "strip_size_kb": 64, 00:09:53.084 "state": "online", 00:09:53.084 "raid_level": "concat", 00:09:53.084 "superblock": true, 00:09:53.084 "num_base_bdevs": 4, 00:09:53.084 "num_base_bdevs_discovered": 4, 00:09:53.084 "num_base_bdevs_operational": 4, 00:09:53.084 "base_bdevs_list": [ 00:09:53.084 { 00:09:53.084 "name": "pt1", 00:09:53.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.084 "is_configured": true, 00:09:53.084 "data_offset": 2048, 00:09:53.084 "data_size": 63488 00:09:53.084 }, 00:09:53.084 { 00:09:53.084 "name": "pt2", 00:09:53.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.084 "is_configured": true, 00:09:53.084 "data_offset": 2048, 00:09:53.084 "data_size": 63488 00:09:53.084 }, 00:09:53.084 { 00:09:53.084 "name": "pt3", 00:09:53.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.084 "is_configured": true, 00:09:53.084 "data_offset": 2048, 00:09:53.084 
"data_size": 63488 00:09:53.084 }, 00:09:53.084 { 00:09:53.084 "name": "pt4", 00:09:53.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:53.084 "is_configured": true, 00:09:53.084 "data_offset": 2048, 00:09:53.084 "data_size": 63488 00:09:53.084 } 00:09:53.084 ] 00:09:53.084 }' 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.084 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.655 [2024-12-07 01:53:58.843696] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:53.655 "name": "raid_bdev1", 00:09:53.655 "aliases": [ 00:09:53.655 "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75" 
00:09:53.655 ], 00:09:53.655 "product_name": "Raid Volume", 00:09:53.655 "block_size": 512, 00:09:53.655 "num_blocks": 253952, 00:09:53.655 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:53.655 "assigned_rate_limits": { 00:09:53.655 "rw_ios_per_sec": 0, 00:09:53.655 "rw_mbytes_per_sec": 0, 00:09:53.655 "r_mbytes_per_sec": 0, 00:09:53.655 "w_mbytes_per_sec": 0 00:09:53.655 }, 00:09:53.655 "claimed": false, 00:09:53.655 "zoned": false, 00:09:53.655 "supported_io_types": { 00:09:53.655 "read": true, 00:09:53.655 "write": true, 00:09:53.655 "unmap": true, 00:09:53.655 "flush": true, 00:09:53.655 "reset": true, 00:09:53.655 "nvme_admin": false, 00:09:53.655 "nvme_io": false, 00:09:53.655 "nvme_io_md": false, 00:09:53.655 "write_zeroes": true, 00:09:53.655 "zcopy": false, 00:09:53.655 "get_zone_info": false, 00:09:53.655 "zone_management": false, 00:09:53.655 "zone_append": false, 00:09:53.655 "compare": false, 00:09:53.655 "compare_and_write": false, 00:09:53.655 "abort": false, 00:09:53.655 "seek_hole": false, 00:09:53.655 "seek_data": false, 00:09:53.655 "copy": false, 00:09:53.655 "nvme_iov_md": false 00:09:53.655 }, 00:09:53.655 "memory_domains": [ 00:09:53.655 { 00:09:53.655 "dma_device_id": "system", 00:09:53.655 "dma_device_type": 1 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.655 "dma_device_type": 2 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "system", 00:09:53.655 "dma_device_type": 1 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.655 "dma_device_type": 2 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "system", 00:09:53.655 "dma_device_type": 1 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.655 "dma_device_type": 2 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": "system", 00:09:53.655 "dma_device_type": 1 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:53.655 "dma_device_type": 2 00:09:53.655 } 00:09:53.655 ], 00:09:53.655 "driver_specific": { 00:09:53.655 "raid": { 00:09:53.655 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:53.655 "strip_size_kb": 64, 00:09:53.655 "state": "online", 00:09:53.655 "raid_level": "concat", 00:09:53.655 "superblock": true, 00:09:53.655 "num_base_bdevs": 4, 00:09:53.655 "num_base_bdevs_discovered": 4, 00:09:53.655 "num_base_bdevs_operational": 4, 00:09:53.655 "base_bdevs_list": [ 00:09:53.655 { 00:09:53.655 "name": "pt1", 00:09:53.655 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.655 "is_configured": true, 00:09:53.655 "data_offset": 2048, 00:09:53.655 "data_size": 63488 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "name": "pt2", 00:09:53.655 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.655 "is_configured": true, 00:09:53.655 "data_offset": 2048, 00:09:53.655 "data_size": 63488 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "name": "pt3", 00:09:53.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.655 "is_configured": true, 00:09:53.655 "data_offset": 2048, 00:09:53.655 "data_size": 63488 00:09:53.655 }, 00:09:53.655 { 00:09:53.655 "name": "pt4", 00:09:53.655 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:53.655 "is_configured": true, 00:09:53.655 "data_offset": 2048, 00:09:53.655 "data_size": 63488 00:09:53.655 } 00:09:53.655 ] 00:09:53.655 } 00:09:53.655 } 00:09:53.655 }' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:53.655 pt2 00:09:53.655 pt3 00:09:53.655 pt4' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.655 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:53.656 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.656 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 01:53:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.656 01:53:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.656 01:53:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.656 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 [2024-12-07 01:53:59.163186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75 ']' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 [2024-12-07 01:53:59.210802] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.916 [2024-12-07 01:53:59.210880] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.916 [2024-12-07 01:53:59.210997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.916 [2024-12-07 01:53:59.211117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.916 [2024-12-07 01:53:59.211180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:53.916 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.917 01:53:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.917 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.177 [2024-12-07 01:53:59.378548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:54.177 [2024-12-07 01:53:59.380640] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:54.177 [2024-12-07 01:53:59.380743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:54.177 [2024-12-07 01:53:59.380802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:54.177 [2024-12-07 01:53:59.380905] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:54.177 [2024-12-07 01:53:59.381004] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:54.177 [2024-12-07 01:53:59.381080] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:54.177 [2024-12-07 01:53:59.381148] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:54.177 [2024-12-07 01:53:59.381210] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.177 [2024-12-07 01:53:59.381242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:54.177 request: 00:09:54.177 { 00:09:54.177 "name": "raid_bdev1", 00:09:54.177 "raid_level": "concat", 00:09:54.177 "base_bdevs": [ 00:09:54.177 "malloc1", 00:09:54.177 "malloc2", 00:09:54.177 "malloc3", 00:09:54.177 "malloc4" 00:09:54.177 ], 00:09:54.177 "strip_size_kb": 64, 00:09:54.177 "superblock": false, 00:09:54.177 "method": "bdev_raid_create", 00:09:54.177 "req_id": 1 00:09:54.177 } 00:09:54.177 Got JSON-RPC error response 00:09:54.177 response: 00:09:54.177 { 00:09:54.177 "code": -17, 00:09:54.177 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:54.177 } 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.177 [2024-12-07 01:53:59.446376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:54.177 [2024-12-07 01:53:59.446460] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.177 [2024-12-07 01:53:59.446498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:54.177 [2024-12-07 01:53:59.446524] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.177 [2024-12-07 01:53:59.448766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.177 [2024-12-07 01:53:59.448830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:54.177 [2024-12-07 01:53:59.448926] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:54.177 [2024-12-07 01:53:59.449018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:54.177 pt1 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.177 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.177 "name": "raid_bdev1", 00:09:54.177 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:54.177 "strip_size_kb": 64, 00:09:54.177 "state": "configuring", 00:09:54.177 "raid_level": "concat", 00:09:54.177 "superblock": true, 00:09:54.177 "num_base_bdevs": 4, 00:09:54.177 "num_base_bdevs_discovered": 1, 00:09:54.177 "num_base_bdevs_operational": 4, 00:09:54.177 "base_bdevs_list": [ 00:09:54.177 { 00:09:54.177 "name": "pt1", 00:09:54.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.178 "is_configured": true, 00:09:54.178 "data_offset": 2048, 00:09:54.178 "data_size": 63488 00:09:54.178 }, 00:09:54.178 { 00:09:54.178 "name": null, 00:09:54.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.178 "is_configured": false, 00:09:54.178 "data_offset": 2048, 00:09:54.178 "data_size": 63488 00:09:54.178 }, 00:09:54.178 { 00:09:54.178 "name": null, 00:09:54.178 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.178 "is_configured": false, 00:09:54.178 "data_offset": 2048, 00:09:54.178 "data_size": 63488 00:09:54.178 }, 00:09:54.178 { 00:09:54.178 "name": null, 00:09:54.178 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:54.178 "is_configured": false, 00:09:54.178 "data_offset": 2048, 00:09:54.178 "data_size": 63488 00:09:54.178 } 00:09:54.178 ] 00:09:54.178 }' 00:09:54.178 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.178 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 [2024-12-07 01:53:59.865719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.438 [2024-12-07 01:53:59.865834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.438 [2024-12-07 01:53:59.865872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:54.438 [2024-12-07 01:53:59.865921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.438 [2024-12-07 01:53:59.866356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.438 [2024-12-07 01:53:59.866409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.438 [2024-12-07 01:53:59.866518] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.438 [2024-12-07 01:53:59.866579] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.438 pt2 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 [2024-12-07 01:53:59.877752] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.438 01:53:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.698 01:53:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.698 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.698 "name": "raid_bdev1", 00:09:54.698 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:54.698 "strip_size_kb": 64, 00:09:54.698 "state": "configuring", 00:09:54.698 "raid_level": "concat", 00:09:54.698 "superblock": true, 00:09:54.698 "num_base_bdevs": 4, 00:09:54.698 "num_base_bdevs_discovered": 1, 00:09:54.698 "num_base_bdevs_operational": 4, 00:09:54.698 "base_bdevs_list": [ 00:09:54.698 { 00:09:54.698 "name": "pt1", 00:09:54.698 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.698 "is_configured": true, 00:09:54.698 "data_offset": 2048, 00:09:54.698 "data_size": 63488 00:09:54.698 }, 00:09:54.698 { 00:09:54.698 "name": null, 00:09:54.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.698 "is_configured": false, 00:09:54.698 "data_offset": 0, 00:09:54.698 "data_size": 63488 00:09:54.698 }, 00:09:54.698 { 00:09:54.698 "name": null, 00:09:54.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.698 "is_configured": false, 00:09:54.698 "data_offset": 2048, 00:09:54.698 "data_size": 63488 00:09:54.698 }, 00:09:54.698 { 00:09:54.698 "name": null, 00:09:54.698 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:54.698 "is_configured": false, 00:09:54.698 "data_offset": 2048, 00:09:54.698 "data_size": 63488 00:09:54.698 } 00:09:54.698 ] 00:09:54.698 }' 00:09:54.698 01:53:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.698 01:53:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.959 [2024-12-07 01:54:00.328919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:54.959 [2024-12-07 01:54:00.329026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.959 [2024-12-07 01:54:00.329061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:54.959 [2024-12-07 01:54:00.329090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.959 [2024-12-07 01:54:00.329522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.959 [2024-12-07 01:54:00.329579] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:54.959 [2024-12-07 01:54:00.329691] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:54.959 [2024-12-07 01:54:00.329745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:54.959 pt2 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.959 [2024-12-07 01:54:00.340841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:54.959 [2024-12-07 01:54:00.340952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.959 [2024-12-07 01:54:00.340985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:54.959 [2024-12-07 01:54:00.341022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.959 [2024-12-07 01:54:00.341352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.959 [2024-12-07 01:54:00.341405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:54.959 [2024-12-07 01:54:00.341488] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:54.959 [2024-12-07 01:54:00.341535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:54.959 pt3 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.959 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.959 [2024-12-07 01:54:00.352835] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:54.959 [2024-12-07 01:54:00.352926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:54.959 [2024-12-07 01:54:00.352954] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:54.959 [2024-12-07 01:54:00.352981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:54.959 [2024-12-07 01:54:00.353286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:54.959 [2024-12-07 01:54:00.353339] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:54.959 [2024-12-07 01:54:00.353410] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:54.959 [2024-12-07 01:54:00.353455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:54.959 [2024-12-07 01:54:00.353565] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:54.959 [2024-12-07 01:54:00.353602] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:54.960 [2024-12-07 01:54:00.353846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:54.960 [2024-12-07 01:54:00.353991] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:54.960 [2024-12-07 01:54:00.354028] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:54.960 [2024-12-07 01:54:00.354154] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.960 pt4 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.960 "name": "raid_bdev1", 00:09:54.960 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:54.960 "strip_size_kb": 64, 00:09:54.960 "state": "online", 00:09:54.960 "raid_level": "concat", 00:09:54.960 
"superblock": true, 00:09:54.960 "num_base_bdevs": 4, 00:09:54.960 "num_base_bdevs_discovered": 4, 00:09:54.960 "num_base_bdevs_operational": 4, 00:09:54.960 "base_bdevs_list": [ 00:09:54.960 { 00:09:54.960 "name": "pt1", 00:09:54.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.960 "is_configured": true, 00:09:54.960 "data_offset": 2048, 00:09:54.960 "data_size": 63488 00:09:54.960 }, 00:09:54.960 { 00:09:54.960 "name": "pt2", 00:09:54.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.960 "is_configured": true, 00:09:54.960 "data_offset": 2048, 00:09:54.960 "data_size": 63488 00:09:54.960 }, 00:09:54.960 { 00:09:54.960 "name": "pt3", 00:09:54.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.960 "is_configured": true, 00:09:54.960 "data_offset": 2048, 00:09:54.960 "data_size": 63488 00:09:54.960 }, 00:09:54.960 { 00:09:54.960 "name": "pt4", 00:09:54.960 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:54.960 "is_configured": true, 00:09:54.960 "data_offset": 2048, 00:09:54.960 "data_size": 63488 00:09:54.960 } 00:09:54.960 ] 00:09:54.960 }' 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.960 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.530 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:55.530 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.531 01:54:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 [2024-12-07 01:54:00.856332] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.531 "name": "raid_bdev1", 00:09:55.531 "aliases": [ 00:09:55.531 "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75" 00:09:55.531 ], 00:09:55.531 "product_name": "Raid Volume", 00:09:55.531 "block_size": 512, 00:09:55.531 "num_blocks": 253952, 00:09:55.531 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:55.531 "assigned_rate_limits": { 00:09:55.531 "rw_ios_per_sec": 0, 00:09:55.531 "rw_mbytes_per_sec": 0, 00:09:55.531 "r_mbytes_per_sec": 0, 00:09:55.531 "w_mbytes_per_sec": 0 00:09:55.531 }, 00:09:55.531 "claimed": false, 00:09:55.531 "zoned": false, 00:09:55.531 "supported_io_types": { 00:09:55.531 "read": true, 00:09:55.531 "write": true, 00:09:55.531 "unmap": true, 00:09:55.531 "flush": true, 00:09:55.531 "reset": true, 00:09:55.531 "nvme_admin": false, 00:09:55.531 "nvme_io": false, 00:09:55.531 "nvme_io_md": false, 00:09:55.531 "write_zeroes": true, 00:09:55.531 "zcopy": false, 00:09:55.531 "get_zone_info": false, 00:09:55.531 "zone_management": false, 00:09:55.531 "zone_append": false, 00:09:55.531 "compare": false, 00:09:55.531 "compare_and_write": false, 00:09:55.531 "abort": false, 00:09:55.531 "seek_hole": false, 00:09:55.531 "seek_data": false, 00:09:55.531 "copy": false, 00:09:55.531 "nvme_iov_md": false 00:09:55.531 }, 00:09:55.531 
"memory_domains": [ 00:09:55.531 { 00:09:55.531 "dma_device_id": "system", 00:09:55.531 "dma_device_type": 1 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.531 "dma_device_type": 2 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "system", 00:09:55.531 "dma_device_type": 1 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.531 "dma_device_type": 2 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "system", 00:09:55.531 "dma_device_type": 1 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.531 "dma_device_type": 2 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "system", 00:09:55.531 "dma_device_type": 1 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.531 "dma_device_type": 2 00:09:55.531 } 00:09:55.531 ], 00:09:55.531 "driver_specific": { 00:09:55.531 "raid": { 00:09:55.531 "uuid": "0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75", 00:09:55.531 "strip_size_kb": 64, 00:09:55.531 "state": "online", 00:09:55.531 "raid_level": "concat", 00:09:55.531 "superblock": true, 00:09:55.531 "num_base_bdevs": 4, 00:09:55.531 "num_base_bdevs_discovered": 4, 00:09:55.531 "num_base_bdevs_operational": 4, 00:09:55.531 "base_bdevs_list": [ 00:09:55.531 { 00:09:55.531 "name": "pt1", 00:09:55.531 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:55.531 "is_configured": true, 00:09:55.531 "data_offset": 2048, 00:09:55.531 "data_size": 63488 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "name": "pt2", 00:09:55.531 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:55.531 "is_configured": true, 00:09:55.531 "data_offset": 2048, 00:09:55.531 "data_size": 63488 00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "name": "pt3", 00:09:55.531 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.531 "is_configured": true, 00:09:55.531 "data_offset": 2048, 00:09:55.531 "data_size": 63488 
00:09:55.531 }, 00:09:55.531 { 00:09:55.531 "name": "pt4", 00:09:55.531 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:55.531 "is_configured": true, 00:09:55.531 "data_offset": 2048, 00:09:55.531 "data_size": 63488 00:09:55.531 } 00:09:55.531 ] 00:09:55.531 } 00:09:55.531 } 00:09:55.531 }' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:55.531 pt2 00:09:55.531 pt3 00:09:55.531 pt4' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.531 01:54:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 
00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:55.791 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:55.792 [2024-12-07 01:54:01.167746] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75 '!=' 0bc3215f-d5f8-4a5f-b8a5-bbd3b27bdd75 ']' 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83193 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 83193 ']' 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 83193 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@955 -- # uname 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83193 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:55.792 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.051 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83193' 00:09:56.051 killing process with pid 83193 00:09:56.051 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 83193 00:09:56.051 [2024-12-07 01:54:01.252378] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.051 [2024-12-07 01:54:01.252478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.051 [2024-12-07 01:54:01.252556] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.051 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 83193 00:09:56.051 [2024-12-07 01:54:01.252568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:56.051 [2024-12-07 01:54:01.297179] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.311 01:54:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:56.311 ************************************ 00:09:56.311 END TEST raid_superblock_test 00:09:56.311 ************************************ 00:09:56.311 00:09:56.311 real 0m4.181s 00:09:56.311 user 0m6.599s 00:09:56.311 sys 0m0.910s 00:09:56.311 01:54:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.311 01:54:01 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.311 01:54:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:56.311 01:54:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:56.311 01:54:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.311 01:54:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.311 ************************************ 00:09:56.311 START TEST raid_read_error_test 00:09:56.311 ************************************ 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ntGYZiulet 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83441 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83441 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 83441 ']' 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:56.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:56.311 01:54:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.311 [2024-12-07 01:54:01.704844] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:56.311 [2024-12-07 01:54:01.704985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83441 ] 00:09:56.571 [2024-12-07 01:54:01.831086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.571 [2024-12-07 01:54:01.878088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.571 [2024-12-07 01:54:01.919329] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.571 [2024-12-07 01:54:01.919452] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.141 BaseBdev1_malloc 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.141 true 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.141 [2024-12-07 01:54:02.584592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.141 [2024-12-07 01:54:02.584650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.141 [2024-12-07 01:54:02.584696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:57.141 [2024-12-07 01:54:02.584705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.141 [2024-12-07 01:54:02.586835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.141 [2024-12-07 01:54:02.586918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.141 BaseBdev1 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.141 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.402 BaseBdev2_malloc 00:09:57.402 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 true 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 [2024-12-07 01:54:02.633600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:57.403 [2024-12-07 01:54:02.633715] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.403 [2024-12-07 01:54:02.633740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:57.403 [2024-12-07 01:54:02.633749] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.403 [2024-12-07 01:54:02.635853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.403 [2024-12-07 01:54:02.635897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:57.403 BaseBdev2 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 BaseBdev3_malloc 00:09:57.403 01:54:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 true 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 [2024-12-07 01:54:02.674223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:57.403 [2024-12-07 01:54:02.674316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.403 [2024-12-07 01:54:02.674340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:57.403 [2024-12-07 01:54:02.674349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.403 [2024-12-07 01:54:02.676490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.403 [2024-12-07 01:54:02.676537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:57.403 BaseBdev3 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 BaseBdev4_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 true 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 [2024-12-07 01:54:02.714722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:57.403 [2024-12-07 01:54:02.714825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.403 [2024-12-07 01:54:02.714850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:57.403 [2024-12-07 01:54:02.714859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.403 [2024-12-07 01:54:02.717003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.403 [2024-12-07 01:54:02.717070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:57.403 BaseBdev4 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.403 [2024-12-07 01:54:02.726761] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.403 [2024-12-07 01:54:02.728657] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.403 [2024-12-07 01:54:02.728800] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.403 [2024-12-07 01:54:02.728886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.403 [2024-12-07 01:54:02.729112] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:57.403 [2024-12-07 01:54:02.729157] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:57.403 [2024-12-07 01:54:02.729411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:57.403 [2024-12-07 01:54:02.729538] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:57.403 [2024-12-07 01:54:02.729555] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:57.403 [2024-12-07 01:54:02.729685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:57.403 01:54:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.403 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.404 "name": "raid_bdev1", 00:09:57.404 "uuid": "c966ecc8-504e-4033-862d-09bb83f57fe7", 00:09:57.404 "strip_size_kb": 64, 00:09:57.404 "state": "online", 00:09:57.404 "raid_level": "concat", 00:09:57.404 "superblock": true, 00:09:57.404 "num_base_bdevs": 4, 00:09:57.404 "num_base_bdevs_discovered": 4, 00:09:57.404 "num_base_bdevs_operational": 4, 00:09:57.404 "base_bdevs_list": [ 
00:09:57.404 { 00:09:57.404 "name": "BaseBdev1", 00:09:57.404 "uuid": "58bf48d3-b514-520a-aabe-017a7eb7b003", 00:09:57.404 "is_configured": true, 00:09:57.404 "data_offset": 2048, 00:09:57.404 "data_size": 63488 00:09:57.404 }, 00:09:57.404 { 00:09:57.404 "name": "BaseBdev2", 00:09:57.404 "uuid": "6d256238-84bd-5f09-9985-eda29a6b2153", 00:09:57.404 "is_configured": true, 00:09:57.404 "data_offset": 2048, 00:09:57.404 "data_size": 63488 00:09:57.404 }, 00:09:57.404 { 00:09:57.404 "name": "BaseBdev3", 00:09:57.404 "uuid": "a60d4cd8-237d-5011-903e-6ef89e4fb323", 00:09:57.404 "is_configured": true, 00:09:57.404 "data_offset": 2048, 00:09:57.404 "data_size": 63488 00:09:57.404 }, 00:09:57.404 { 00:09:57.404 "name": "BaseBdev4", 00:09:57.404 "uuid": "c035fd47-f1a2-5ee7-b64f-471daef177cc", 00:09:57.404 "is_configured": true, 00:09:57.404 "data_offset": 2048, 00:09:57.404 "data_size": 63488 00:09:57.404 } 00:09:57.404 ] 00:09:57.404 }' 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.404 01:54:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.983 01:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.983 01:54:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.983 [2024-12-07 01:54:03.290185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.950 01:54:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.950 01:54:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.950 "name": "raid_bdev1", 00:09:58.950 "uuid": "c966ecc8-504e-4033-862d-09bb83f57fe7", 00:09:58.950 "strip_size_kb": 64, 00:09:58.950 "state": "online", 00:09:58.950 "raid_level": "concat", 00:09:58.950 "superblock": true, 00:09:58.950 "num_base_bdevs": 4, 00:09:58.950 "num_base_bdevs_discovered": 4, 00:09:58.950 "num_base_bdevs_operational": 4, 00:09:58.950 "base_bdevs_list": [ 00:09:58.950 { 00:09:58.950 "name": "BaseBdev1", 00:09:58.950 "uuid": "58bf48d3-b514-520a-aabe-017a7eb7b003", 00:09:58.950 "is_configured": true, 00:09:58.950 "data_offset": 2048, 00:09:58.950 "data_size": 63488 00:09:58.950 }, 00:09:58.950 { 00:09:58.950 "name": "BaseBdev2", 00:09:58.950 "uuid": "6d256238-84bd-5f09-9985-eda29a6b2153", 00:09:58.950 "is_configured": true, 00:09:58.950 "data_offset": 2048, 00:09:58.950 "data_size": 63488 00:09:58.950 }, 00:09:58.950 { 00:09:58.950 "name": "BaseBdev3", 00:09:58.950 "uuid": "a60d4cd8-237d-5011-903e-6ef89e4fb323", 00:09:58.950 "is_configured": true, 00:09:58.950 "data_offset": 2048, 00:09:58.950 "data_size": 63488 00:09:58.950 }, 00:09:58.950 { 00:09:58.950 "name": "BaseBdev4", 00:09:58.950 "uuid": "c035fd47-f1a2-5ee7-b64f-471daef177cc", 00:09:58.950 "is_configured": true, 00:09:58.950 "data_offset": 2048, 00:09:58.950 "data_size": 63488 00:09:58.950 } 00:09:58.950 ] 00:09:58.950 }' 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.950 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.210 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:59.210 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.210 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.210 [2024-12-07 01:54:04.665991] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:59.210 [2024-12-07 01:54:04.666066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.210 [2024-12-07 01:54:04.668622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.210 [2024-12-07 01:54:04.668743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.210 [2024-12-07 01:54:04.668812] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.210 [2024-12-07 01:54:04.668872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:59.471 { 00:09:59.471 "results": [ 00:09:59.471 { 00:09:59.471 "job": "raid_bdev1", 00:09:59.471 "core_mask": "0x1", 00:09:59.471 "workload": "randrw", 00:09:59.471 "percentage": 50, 00:09:59.471 "status": "finished", 00:09:59.471 "queue_depth": 1, 00:09:59.471 "io_size": 131072, 00:09:59.471 "runtime": 1.376684, 00:09:59.471 "iops": 16499.06587132559, 00:09:59.471 "mibps": 2062.383233915699, 00:09:59.471 "io_failed": 1, 00:09:59.471 "io_timeout": 0, 00:09:59.471 "avg_latency_us": 84.07496591041259, 00:09:59.471 "min_latency_us": 24.929257641921396, 00:09:59.471 "max_latency_us": 1359.3711790393013 00:09:59.471 } 00:09:59.471 ], 00:09:59.471 "core_count": 1 00:09:59.471 } 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83441 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 83441 ']' 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 83441 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83441 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83441' 00:09:59.471 killing process with pid 83441 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 83441 00:09:59.471 [2024-12-07 01:54:04.702624] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.471 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 83441 00:09:59.471 [2024-12-07 01:54:04.738250] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ntGYZiulet 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:59.731 00:09:59.731 real 0m3.372s 00:09:59.731 user 0m4.296s 00:09:59.731 sys 0m0.529s 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:59.731 01:54:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.731 ************************************ 00:09:59.731 END TEST raid_read_error_test 00:09:59.731 ************************************ 00:09:59.731 01:54:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:59.731 01:54:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:59.731 01:54:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.731 01:54:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.731 ************************************ 00:09:59.731 START TEST raid_write_error_test 00:09:59.731 ************************************ 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XkPkH0dAoA 00:09:59.731 01:54:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83570 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83570 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 83570 ']' 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.731 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.731 [2024-12-07 01:54:05.154066] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:59.731 [2024-12-07 01:54:05.154281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83570 ] 00:09:59.990 [2024-12-07 01:54:05.298985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.990 [2024-12-07 01:54:05.345020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.990 [2024-12-07 01:54:05.386164] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.990 [2024-12-07 01:54:05.386196] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.559 BaseBdev1_malloc 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.559 01:54:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.559 true 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.559 [2024-12-07 01:54:06.007912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:00.559 [2024-12-07 01:54:06.008015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.559 [2024-12-07 01:54:06.008072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:00.559 [2024-12-07 01:54:06.008140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.559 [2024-12-07 01:54:06.010306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.559 [2024-12-07 01:54:06.010377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:00.559 BaseBdev1 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.559 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 BaseBdev2_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:00.819 01:54:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 true 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 [2024-12-07 01:54:06.059807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:00.819 [2024-12-07 01:54:06.059956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.819 [2024-12-07 01:54:06.059998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:00.819 [2024-12-07 01:54:06.060033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.819 [2024-12-07 01:54:06.062194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.819 [2024-12-07 01:54:06.062280] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:00.819 BaseBdev2 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:00.819 BaseBdev3_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 true 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 [2024-12-07 01:54:06.100453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:00.819 [2024-12-07 01:54:06.100555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.819 [2024-12-07 01:54:06.100590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:00.819 [2024-12-07 01:54:06.100617] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.819 [2024-12-07 01:54:06.102714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.819 [2024-12-07 01:54:06.102776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:00.819 BaseBdev3 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 BaseBdev4_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 true 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 [2024-12-07 01:54:06.140786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:00.819 [2024-12-07 01:54:06.140869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.819 [2024-12-07 01:54:06.140921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:00.819 [2024-12-07 01:54:06.140949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.819 [2024-12-07 01:54:06.142937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.819 [2024-12-07 01:54:06.143000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:00.819 BaseBdev4 
00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.819 [2024-12-07 01:54:06.152837] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.819 [2024-12-07 01:54:06.154612] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.819 [2024-12-07 01:54:06.154739] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.819 [2024-12-07 01:54:06.154824] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.819 [2024-12-07 01:54:06.155067] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:00.819 [2024-12-07 01:54:06.155114] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:00.819 [2024-12-07 01:54:06.155353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:00.819 [2024-12-07 01:54:06.155475] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:00.819 [2024-12-07 01:54:06.155488] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:00.819 [2024-12-07 01:54:06.155617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.819 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.820 "name": "raid_bdev1", 00:10:00.820 "uuid": "1c5774fc-8c75-45fb-9a91-1903963e71bf", 00:10:00.820 "strip_size_kb": 64, 00:10:00.820 "state": "online", 00:10:00.820 "raid_level": "concat", 00:10:00.820 "superblock": true, 00:10:00.820 "num_base_bdevs": 4, 00:10:00.820 "num_base_bdevs_discovered": 4, 00:10:00.820 
"num_base_bdevs_operational": 4, 00:10:00.820 "base_bdevs_list": [ 00:10:00.820 { 00:10:00.820 "name": "BaseBdev1", 00:10:00.820 "uuid": "87ca4003-f6e9-5df6-b096-0e3671df1463", 00:10:00.820 "is_configured": true, 00:10:00.820 "data_offset": 2048, 00:10:00.820 "data_size": 63488 00:10:00.820 }, 00:10:00.820 { 00:10:00.820 "name": "BaseBdev2", 00:10:00.820 "uuid": "5edc6b25-7dc3-5810-9805-f5e8c090baa8", 00:10:00.820 "is_configured": true, 00:10:00.820 "data_offset": 2048, 00:10:00.820 "data_size": 63488 00:10:00.820 }, 00:10:00.820 { 00:10:00.820 "name": "BaseBdev3", 00:10:00.820 "uuid": "e436848f-2691-50e3-b4ab-8fc5f70d0e5e", 00:10:00.820 "is_configured": true, 00:10:00.820 "data_offset": 2048, 00:10:00.820 "data_size": 63488 00:10:00.820 }, 00:10:00.820 { 00:10:00.820 "name": "BaseBdev4", 00:10:00.820 "uuid": "731d00af-cd55-59df-8828-6e176dfaeee5", 00:10:00.820 "is_configured": true, 00:10:00.820 "data_offset": 2048, 00:10:00.820 "data_size": 63488 00:10:00.820 } 00:10:00.820 ] 00:10:00.820 }' 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.820 01:54:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.388 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:01.388 01:54:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:01.388 [2024-12-07 01:54:06.668351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 01:54:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.326 "name": "raid_bdev1", 00:10:02.326 "uuid": "1c5774fc-8c75-45fb-9a91-1903963e71bf", 00:10:02.326 "strip_size_kb": 64, 00:10:02.326 "state": "online", 00:10:02.326 "raid_level": "concat", 00:10:02.326 "superblock": true, 00:10:02.326 "num_base_bdevs": 4, 00:10:02.326 "num_base_bdevs_discovered": 4, 00:10:02.326 "num_base_bdevs_operational": 4, 00:10:02.326 "base_bdevs_list": [ 00:10:02.326 { 00:10:02.326 "name": "BaseBdev1", 00:10:02.326 "uuid": "87ca4003-f6e9-5df6-b096-0e3671df1463", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": "BaseBdev2", 00:10:02.326 "uuid": "5edc6b25-7dc3-5810-9805-f5e8c090baa8", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": "BaseBdev3", 00:10:02.326 "uuid": "e436848f-2691-50e3-b4ab-8fc5f70d0e5e", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": "BaseBdev4", 00:10:02.326 "uuid": "731d00af-cd55-59df-8828-6e176dfaeee5", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 } 00:10:02.326 ] 00:10:02.326 }' 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.326 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:02.586 [2024-12-07 01:54:07.975888] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.586 [2024-12-07 01:54:07.975962] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.586 [2024-12-07 01:54:07.978542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.586 [2024-12-07 01:54:07.978650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.586 [2024-12-07 01:54:07.978724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.586 [2024-12-07 01:54:07.978799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:02.586 { 00:10:02.586 "results": [ 00:10:02.586 { 00:10:02.586 "job": "raid_bdev1", 00:10:02.586 "core_mask": "0x1", 00:10:02.586 "workload": "randrw", 00:10:02.586 "percentage": 50, 00:10:02.586 "status": "finished", 00:10:02.586 "queue_depth": 1, 00:10:02.586 "io_size": 131072, 00:10:02.586 "runtime": 1.308269, 00:10:02.586 "iops": 16861.211264655816, 00:10:02.586 "mibps": 2107.651408081977, 00:10:02.586 "io_failed": 1, 00:10:02.586 "io_timeout": 0, 00:10:02.586 "avg_latency_us": 82.2263897983665, 00:10:02.586 "min_latency_us": 24.817467248908297, 00:10:02.586 "max_latency_us": 1352.216593886463 00:10:02.586 } 00:10:02.586 ], 00:10:02.586 "core_count": 1 00:10:02.586 } 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83570 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 83570 ']' 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 83570 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:02.586 01:54:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83570 00:10:02.586 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:02.586 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:02.586 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83570' 00:10:02.586 killing process with pid 83570 00:10:02.586 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 83570 00:10:02.586 [2024-12-07 01:54:08.006992] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.586 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 83570 00:10:02.586 [2024-12-07 01:54:08.041920] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.845 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XkPkH0dAoA 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:02.846 ************************************ 00:10:02.846 END TEST raid_write_error_test 00:10:02.846 ************************************ 00:10:02.846 01:54:08 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:02.846 00:10:02.846 real 0m3.226s 00:10:02.846 user 0m4.000s 00:10:02.846 sys 0m0.518s 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.846 01:54:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.104 01:54:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:03.104 01:54:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:10:03.104 01:54:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:03.104 01:54:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.104 01:54:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:03.104 ************************************ 00:10:03.104 START TEST raid_state_function_test 00:10:03.104 ************************************ 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:03.104 01:54:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:03.104 Process raid pid: 83698 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83698 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83698' 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83698 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83698 ']' 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.104 01:54:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.104 [2024-12-07 01:54:08.443359] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:03.104 [2024-12-07 01:54:08.443592] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.363 [2024-12-07 01:54:08.587345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.363 [2024-12-07 01:54:08.631017] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.363 [2024-12-07 01:54:08.672589] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.363 [2024-12-07 01:54:08.672678] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.933 [2024-12-07 01:54:09.269434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.933 [2024-12-07 01:54:09.269529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.933 [2024-12-07 01:54:09.269560] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.933 [2024-12-07 01:54:09.269583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.933 [2024-12-07 01:54:09.269600] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:03.933 [2024-12-07 01:54:09.269624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.933 [2024-12-07 01:54:09.269641] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.933 [2024-12-07 01:54:09.269670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.933 "name": "Existed_Raid", 00:10:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.933 "strip_size_kb": 0, 00:10:03.933 "state": "configuring", 00:10:03.933 "raid_level": "raid1", 00:10:03.933 "superblock": false, 00:10:03.933 "num_base_bdevs": 4, 00:10:03.933 "num_base_bdevs_discovered": 0, 00:10:03.933 "num_base_bdevs_operational": 4, 00:10:03.933 "base_bdevs_list": [ 00:10:03.933 { 00:10:03.933 "name": "BaseBdev1", 00:10:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.933 "is_configured": false, 00:10:03.933 "data_offset": 0, 00:10:03.933 "data_size": 0 00:10:03.933 }, 00:10:03.933 { 00:10:03.933 "name": "BaseBdev2", 00:10:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.933 "is_configured": false, 00:10:03.933 "data_offset": 0, 00:10:03.933 "data_size": 0 00:10:03.933 }, 00:10:03.933 { 00:10:03.933 "name": "BaseBdev3", 00:10:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.933 "is_configured": false, 00:10:03.933 "data_offset": 0, 00:10:03.933 "data_size": 0 00:10:03.933 }, 00:10:03.933 { 00:10:03.933 "name": "BaseBdev4", 00:10:03.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.933 "is_configured": false, 00:10:03.933 "data_offset": 0, 00:10:03.933 "data_size": 0 00:10:03.933 } 00:10:03.933 ] 00:10:03.933 }' 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.933 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 [2024-12-07 01:54:09.744529] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.501 [2024-12-07 01:54:09.744608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 [2024-12-07 01:54:09.756512] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:04.501 [2024-12-07 01:54:09.756612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:04.501 [2024-12-07 01:54:09.756624] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:04.501 [2024-12-07 01:54:09.756632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:04.501 [2024-12-07 01:54:09.756638] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:04.501 [2024-12-07 01:54:09.756646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:04.501 [2024-12-07 01:54:09.756652] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:04.501 [2024-12-07 01:54:09.756661] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.501 [2024-12-07 01:54:09.777139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.501 BaseBdev1 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:04.501 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.502 [ 00:10:04.502 { 00:10:04.502 "name": "BaseBdev1", 00:10:04.502 "aliases": [ 00:10:04.502 "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a" 00:10:04.502 ], 00:10:04.502 "product_name": "Malloc disk", 00:10:04.502 "block_size": 512, 00:10:04.502 "num_blocks": 65536, 00:10:04.502 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:04.502 "assigned_rate_limits": { 00:10:04.502 "rw_ios_per_sec": 0, 00:10:04.502 "rw_mbytes_per_sec": 0, 00:10:04.502 "r_mbytes_per_sec": 0, 00:10:04.502 "w_mbytes_per_sec": 0 00:10:04.502 }, 00:10:04.502 "claimed": true, 00:10:04.502 "claim_type": "exclusive_write", 00:10:04.502 "zoned": false, 00:10:04.502 "supported_io_types": { 00:10:04.502 "read": true, 00:10:04.502 "write": true, 00:10:04.502 "unmap": true, 00:10:04.502 "flush": true, 00:10:04.502 "reset": true, 00:10:04.502 "nvme_admin": false, 00:10:04.502 "nvme_io": false, 00:10:04.502 "nvme_io_md": false, 00:10:04.502 "write_zeroes": true, 00:10:04.502 "zcopy": true, 00:10:04.502 "get_zone_info": false, 00:10:04.502 "zone_management": false, 00:10:04.502 "zone_append": false, 00:10:04.502 "compare": false, 00:10:04.502 "compare_and_write": false, 00:10:04.502 "abort": true, 00:10:04.502 "seek_hole": false, 00:10:04.502 "seek_data": false, 00:10:04.502 "copy": true, 00:10:04.502 "nvme_iov_md": false 00:10:04.502 }, 00:10:04.502 "memory_domains": [ 00:10:04.502 { 00:10:04.502 "dma_device_id": "system", 00:10:04.502 "dma_device_type": 1 00:10:04.502 }, 00:10:04.502 { 00:10:04.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.502 "dma_device_type": 2 00:10:04.502 } 00:10:04.502 ], 00:10:04.502 "driver_specific": {} 00:10:04.502 } 00:10:04.502 ] 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.502 "name": "Existed_Raid", 
00:10:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.502 "strip_size_kb": 0, 00:10:04.502 "state": "configuring", 00:10:04.502 "raid_level": "raid1", 00:10:04.502 "superblock": false, 00:10:04.502 "num_base_bdevs": 4, 00:10:04.502 "num_base_bdevs_discovered": 1, 00:10:04.502 "num_base_bdevs_operational": 4, 00:10:04.502 "base_bdevs_list": [ 00:10:04.502 { 00:10:04.502 "name": "BaseBdev1", 00:10:04.502 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:04.502 "is_configured": true, 00:10:04.502 "data_offset": 0, 00:10:04.502 "data_size": 65536 00:10:04.502 }, 00:10:04.502 { 00:10:04.502 "name": "BaseBdev2", 00:10:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.502 "is_configured": false, 00:10:04.502 "data_offset": 0, 00:10:04.502 "data_size": 0 00:10:04.502 }, 00:10:04.502 { 00:10:04.502 "name": "BaseBdev3", 00:10:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.502 "is_configured": false, 00:10:04.502 "data_offset": 0, 00:10:04.502 "data_size": 0 00:10:04.502 }, 00:10:04.502 { 00:10:04.502 "name": "BaseBdev4", 00:10:04.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.502 "is_configured": false, 00:10:04.502 "data_offset": 0, 00:10:04.502 "data_size": 0 00:10:04.502 } 00:10:04.502 ] 00:10:04.502 }' 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.502 01:54:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.070 [2024-12-07 01:54:10.260401] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.070 [2024-12-07 01:54:10.260507] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.070 [2024-12-07 01:54:10.272421] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:05.070 [2024-12-07 01:54:10.274276] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:05.070 [2024-12-07 01:54:10.274349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:05.070 [2024-12-07 01:54:10.274377] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:05.070 [2024-12-07 01:54:10.274399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:05.070 [2024-12-07 01:54:10.274417] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:05.070 [2024-12-07 01:54:10.274436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.070 
01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.070 "name": "Existed_Raid", 00:10:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.070 "strip_size_kb": 0, 00:10:05.070 "state": "configuring", 00:10:05.070 "raid_level": "raid1", 00:10:05.070 "superblock": false, 00:10:05.070 "num_base_bdevs": 4, 00:10:05.070 "num_base_bdevs_discovered": 1, 
00:10:05.070 "num_base_bdevs_operational": 4, 00:10:05.070 "base_bdevs_list": [ 00:10:05.070 { 00:10:05.070 "name": "BaseBdev1", 00:10:05.070 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:05.070 "is_configured": true, 00:10:05.070 "data_offset": 0, 00:10:05.070 "data_size": 65536 00:10:05.070 }, 00:10:05.070 { 00:10:05.070 "name": "BaseBdev2", 00:10:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.070 "is_configured": false, 00:10:05.070 "data_offset": 0, 00:10:05.070 "data_size": 0 00:10:05.070 }, 00:10:05.070 { 00:10:05.070 "name": "BaseBdev3", 00:10:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.070 "is_configured": false, 00:10:05.070 "data_offset": 0, 00:10:05.070 "data_size": 0 00:10:05.070 }, 00:10:05.070 { 00:10:05.070 "name": "BaseBdev4", 00:10:05.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.070 "is_configured": false, 00:10:05.070 "data_offset": 0, 00:10:05.070 "data_size": 0 00:10:05.070 } 00:10:05.070 ] 00:10:05.070 }' 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.070 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.330 [2024-12-07 01:54:10.680609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.330 BaseBdev2 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.330 [ 00:10:05.330 { 00:10:05.330 "name": "BaseBdev2", 00:10:05.330 "aliases": [ 00:10:05.330 "f03dd8af-58ee-4590-a44a-734899ea0c1d" 00:10:05.330 ], 00:10:05.330 "product_name": "Malloc disk", 00:10:05.330 "block_size": 512, 00:10:05.330 "num_blocks": 65536, 00:10:05.330 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:05.330 "assigned_rate_limits": { 00:10:05.330 "rw_ios_per_sec": 0, 00:10:05.330 "rw_mbytes_per_sec": 0, 00:10:05.330 "r_mbytes_per_sec": 0, 00:10:05.330 "w_mbytes_per_sec": 0 00:10:05.330 }, 00:10:05.330 "claimed": true, 00:10:05.330 "claim_type": "exclusive_write", 00:10:05.330 "zoned": false, 00:10:05.330 "supported_io_types": { 00:10:05.330 "read": true, 
00:10:05.330 "write": true, 00:10:05.330 "unmap": true, 00:10:05.330 "flush": true, 00:10:05.330 "reset": true, 00:10:05.330 "nvme_admin": false, 00:10:05.330 "nvme_io": false, 00:10:05.330 "nvme_io_md": false, 00:10:05.330 "write_zeroes": true, 00:10:05.330 "zcopy": true, 00:10:05.330 "get_zone_info": false, 00:10:05.330 "zone_management": false, 00:10:05.330 "zone_append": false, 00:10:05.330 "compare": false, 00:10:05.330 "compare_and_write": false, 00:10:05.330 "abort": true, 00:10:05.330 "seek_hole": false, 00:10:05.330 "seek_data": false, 00:10:05.330 "copy": true, 00:10:05.330 "nvme_iov_md": false 00:10:05.330 }, 00:10:05.330 "memory_domains": [ 00:10:05.330 { 00:10:05.330 "dma_device_id": "system", 00:10:05.330 "dma_device_type": 1 00:10:05.330 }, 00:10:05.330 { 00:10:05.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.330 "dma_device_type": 2 00:10:05.330 } 00:10:05.330 ], 00:10:05.330 "driver_specific": {} 00:10:05.330 } 00:10:05.330 ] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.330 "name": "Existed_Raid", 00:10:05.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.330 "strip_size_kb": 0, 00:10:05.330 "state": "configuring", 00:10:05.330 "raid_level": "raid1", 00:10:05.330 "superblock": false, 00:10:05.330 "num_base_bdevs": 4, 00:10:05.330 "num_base_bdevs_discovered": 2, 00:10:05.330 "num_base_bdevs_operational": 4, 00:10:05.330 "base_bdevs_list": [ 00:10:05.330 { 00:10:05.330 "name": "BaseBdev1", 00:10:05.330 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:05.330 "is_configured": true, 00:10:05.330 "data_offset": 0, 00:10:05.330 "data_size": 65536 00:10:05.330 }, 00:10:05.330 { 00:10:05.330 "name": "BaseBdev2", 00:10:05.330 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:05.330 "is_configured": true, 
00:10:05.330 "data_offset": 0, 00:10:05.330 "data_size": 65536 00:10:05.330 }, 00:10:05.330 { 00:10:05.330 "name": "BaseBdev3", 00:10:05.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.330 "is_configured": false, 00:10:05.330 "data_offset": 0, 00:10:05.330 "data_size": 0 00:10:05.330 }, 00:10:05.330 { 00:10:05.330 "name": "BaseBdev4", 00:10:05.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.330 "is_configured": false, 00:10:05.330 "data_offset": 0, 00:10:05.330 "data_size": 0 00:10:05.330 } 00:10:05.330 ] 00:10:05.330 }' 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.330 01:54:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.901 [2024-12-07 01:54:11.130589] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.901 BaseBdev3 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.901 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.902 [ 00:10:05.902 { 00:10:05.902 "name": "BaseBdev3", 00:10:05.902 "aliases": [ 00:10:05.902 "b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64" 00:10:05.902 ], 00:10:05.902 "product_name": "Malloc disk", 00:10:05.902 "block_size": 512, 00:10:05.902 "num_blocks": 65536, 00:10:05.902 "uuid": "b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64", 00:10:05.902 "assigned_rate_limits": { 00:10:05.902 "rw_ios_per_sec": 0, 00:10:05.902 "rw_mbytes_per_sec": 0, 00:10:05.902 "r_mbytes_per_sec": 0, 00:10:05.902 "w_mbytes_per_sec": 0 00:10:05.902 }, 00:10:05.902 "claimed": true, 00:10:05.902 "claim_type": "exclusive_write", 00:10:05.902 "zoned": false, 00:10:05.902 "supported_io_types": { 00:10:05.902 "read": true, 00:10:05.902 "write": true, 00:10:05.902 "unmap": true, 00:10:05.902 "flush": true, 00:10:05.902 "reset": true, 00:10:05.902 "nvme_admin": false, 00:10:05.902 "nvme_io": false, 00:10:05.902 "nvme_io_md": false, 00:10:05.902 "write_zeroes": true, 00:10:05.902 "zcopy": true, 00:10:05.902 "get_zone_info": false, 00:10:05.902 "zone_management": false, 00:10:05.902 "zone_append": false, 00:10:05.902 "compare": false, 00:10:05.902 "compare_and_write": false, 
00:10:05.902 "abort": true, 00:10:05.902 "seek_hole": false, 00:10:05.902 "seek_data": false, 00:10:05.902 "copy": true, 00:10:05.902 "nvme_iov_md": false 00:10:05.902 }, 00:10:05.902 "memory_domains": [ 00:10:05.902 { 00:10:05.902 "dma_device_id": "system", 00:10:05.902 "dma_device_type": 1 00:10:05.902 }, 00:10:05.902 { 00:10:05.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.902 "dma_device_type": 2 00:10:05.902 } 00:10:05.902 ], 00:10:05.902 "driver_specific": {} 00:10:05.902 } 00:10:05.902 ] 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.902 "name": "Existed_Raid", 00:10:05.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.902 "strip_size_kb": 0, 00:10:05.902 "state": "configuring", 00:10:05.902 "raid_level": "raid1", 00:10:05.902 "superblock": false, 00:10:05.902 "num_base_bdevs": 4, 00:10:05.902 "num_base_bdevs_discovered": 3, 00:10:05.902 "num_base_bdevs_operational": 4, 00:10:05.902 "base_bdevs_list": [ 00:10:05.902 { 00:10:05.902 "name": "BaseBdev1", 00:10:05.902 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:05.902 "is_configured": true, 00:10:05.902 "data_offset": 0, 00:10:05.902 "data_size": 65536 00:10:05.902 }, 00:10:05.902 { 00:10:05.902 "name": "BaseBdev2", 00:10:05.902 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:05.902 "is_configured": true, 00:10:05.902 "data_offset": 0, 00:10:05.902 "data_size": 65536 00:10:05.902 }, 00:10:05.902 { 00:10:05.902 "name": "BaseBdev3", 00:10:05.902 "uuid": "b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64", 00:10:05.902 "is_configured": true, 00:10:05.902 "data_offset": 0, 00:10:05.902 "data_size": 65536 00:10:05.902 }, 00:10:05.902 { 00:10:05.902 "name": "BaseBdev4", 00:10:05.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.902 "is_configured": false, 
00:10:05.902 "data_offset": 0, 00:10:05.902 "data_size": 0 00:10:05.902 } 00:10:05.902 ] 00:10:05.902 }' 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.902 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.167 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 [2024-12-07 01:54:11.584677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:06.168 [2024-12-07 01:54:11.584802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:06.168 [2024-12-07 01:54:11.584830] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:06.168 [2024-12-07 01:54:11.585162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:06.168 [2024-12-07 01:54:11.585357] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:06.168 [2024-12-07 01:54:11.585401] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:10:06.168 [2024-12-07 01:54:11.585634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.168 BaseBdev4 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.168 [ 00:10:06.168 { 00:10:06.168 "name": "BaseBdev4", 00:10:06.168 "aliases": [ 00:10:06.168 "f93e250f-5279-4ef6-b2b1-b874f42dec03" 00:10:06.168 ], 00:10:06.168 "product_name": "Malloc disk", 00:10:06.168 "block_size": 512, 00:10:06.168 "num_blocks": 65536, 00:10:06.168 "uuid": "f93e250f-5279-4ef6-b2b1-b874f42dec03", 00:10:06.168 "assigned_rate_limits": { 00:10:06.168 "rw_ios_per_sec": 0, 00:10:06.168 "rw_mbytes_per_sec": 0, 00:10:06.168 "r_mbytes_per_sec": 0, 00:10:06.168 "w_mbytes_per_sec": 0 00:10:06.168 }, 00:10:06.168 "claimed": true, 00:10:06.168 "claim_type": "exclusive_write", 00:10:06.168 "zoned": false, 00:10:06.168 "supported_io_types": { 00:10:06.168 "read": true, 00:10:06.168 "write": true, 00:10:06.168 "unmap": true, 00:10:06.168 "flush": true, 00:10:06.168 "reset": true, 00:10:06.168 
"nvme_admin": false, 00:10:06.168 "nvme_io": false, 00:10:06.168 "nvme_io_md": false, 00:10:06.168 "write_zeroes": true, 00:10:06.168 "zcopy": true, 00:10:06.168 "get_zone_info": false, 00:10:06.168 "zone_management": false, 00:10:06.168 "zone_append": false, 00:10:06.168 "compare": false, 00:10:06.168 "compare_and_write": false, 00:10:06.168 "abort": true, 00:10:06.168 "seek_hole": false, 00:10:06.168 "seek_data": false, 00:10:06.168 "copy": true, 00:10:06.168 "nvme_iov_md": false 00:10:06.168 }, 00:10:06.168 "memory_domains": [ 00:10:06.168 { 00:10:06.168 "dma_device_id": "system", 00:10:06.168 "dma_device_type": 1 00:10:06.168 }, 00:10:06.168 { 00:10:06.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.168 "dma_device_type": 2 00:10:06.168 } 00:10:06.168 ], 00:10:06.168 "driver_specific": {} 00:10:06.168 } 00:10:06.168 ] 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.168 01:54:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.168 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.427 "name": "Existed_Raid", 00:10:06.427 "uuid": "ac233000-78af-4079-a88b-9c14ffe57597", 00:10:06.427 "strip_size_kb": 0, 00:10:06.427 "state": "online", 00:10:06.427 "raid_level": "raid1", 00:10:06.427 "superblock": false, 00:10:06.427 "num_base_bdevs": 4, 00:10:06.427 "num_base_bdevs_discovered": 4, 00:10:06.427 "num_base_bdevs_operational": 4, 00:10:06.427 "base_bdevs_list": [ 00:10:06.427 { 00:10:06.427 "name": "BaseBdev1", 00:10:06.427 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:06.427 "is_configured": true, 00:10:06.427 "data_offset": 0, 00:10:06.427 "data_size": 65536 00:10:06.427 }, 00:10:06.427 { 00:10:06.427 "name": "BaseBdev2", 00:10:06.427 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:06.427 "is_configured": true, 00:10:06.427 "data_offset": 0, 00:10:06.427 "data_size": 65536 00:10:06.427 }, 00:10:06.427 { 00:10:06.427 "name": "BaseBdev3", 00:10:06.427 "uuid": 
"b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64", 00:10:06.427 "is_configured": true, 00:10:06.427 "data_offset": 0, 00:10:06.427 "data_size": 65536 00:10:06.427 }, 00:10:06.427 { 00:10:06.427 "name": "BaseBdev4", 00:10:06.427 "uuid": "f93e250f-5279-4ef6-b2b1-b874f42dec03", 00:10:06.427 "is_configured": true, 00:10:06.427 "data_offset": 0, 00:10:06.427 "data_size": 65536 00:10:06.427 } 00:10:06.427 ] 00:10:06.427 }' 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.427 01:54:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.686 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.686 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.686 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.687 [2024-12-07 01:54:12.060266] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.687 01:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.687 "name": "Existed_Raid", 00:10:06.687 "aliases": [ 00:10:06.687 "ac233000-78af-4079-a88b-9c14ffe57597" 00:10:06.687 ], 00:10:06.687 "product_name": "Raid Volume", 00:10:06.687 "block_size": 512, 00:10:06.687 "num_blocks": 65536, 00:10:06.687 "uuid": "ac233000-78af-4079-a88b-9c14ffe57597", 00:10:06.687 "assigned_rate_limits": { 00:10:06.687 "rw_ios_per_sec": 0, 00:10:06.687 "rw_mbytes_per_sec": 0, 00:10:06.687 "r_mbytes_per_sec": 0, 00:10:06.687 "w_mbytes_per_sec": 0 00:10:06.687 }, 00:10:06.687 "claimed": false, 00:10:06.687 "zoned": false, 00:10:06.687 "supported_io_types": { 00:10:06.687 "read": true, 00:10:06.687 "write": true, 00:10:06.687 "unmap": false, 00:10:06.687 "flush": false, 00:10:06.687 "reset": true, 00:10:06.687 "nvme_admin": false, 00:10:06.687 "nvme_io": false, 00:10:06.687 "nvme_io_md": false, 00:10:06.687 "write_zeroes": true, 00:10:06.687 "zcopy": false, 00:10:06.687 "get_zone_info": false, 00:10:06.687 "zone_management": false, 00:10:06.687 "zone_append": false, 00:10:06.687 "compare": false, 00:10:06.687 "compare_and_write": false, 00:10:06.687 "abort": false, 00:10:06.687 "seek_hole": false, 00:10:06.687 "seek_data": false, 00:10:06.687 "copy": false, 00:10:06.687 "nvme_iov_md": false 00:10:06.687 }, 00:10:06.687 "memory_domains": [ 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:06.687 "dma_device_type": 2 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "system", 00:10:06.687 "dma_device_type": 1 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.687 "dma_device_type": 2 00:10:06.687 } 00:10:06.687 ], 00:10:06.687 "driver_specific": { 00:10:06.687 "raid": { 00:10:06.687 "uuid": "ac233000-78af-4079-a88b-9c14ffe57597", 00:10:06.687 "strip_size_kb": 0, 00:10:06.687 "state": "online", 00:10:06.687 "raid_level": "raid1", 00:10:06.687 "superblock": false, 00:10:06.687 "num_base_bdevs": 4, 00:10:06.687 "num_base_bdevs_discovered": 4, 00:10:06.687 "num_base_bdevs_operational": 4, 00:10:06.687 "base_bdevs_list": [ 00:10:06.687 { 00:10:06.687 "name": "BaseBdev1", 00:10:06.687 "uuid": "3e96cd6b-fb96-4f08-8c3b-623f07a9da3a", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev2", 00:10:06.687 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev3", 00:10:06.687 "uuid": "b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 }, 00:10:06.687 { 00:10:06.687 "name": "BaseBdev4", 00:10:06.687 "uuid": "f93e250f-5279-4ef6-b2b1-b874f42dec03", 00:10:06.687 "is_configured": true, 00:10:06.687 "data_offset": 0, 00:10:06.687 "data_size": 65536 00:10:06.687 } 00:10:06.687 ] 00:10:06.687 } 00:10:06.687 } 00:10:06.687 }' 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:06.687 BaseBdev2 00:10:06.687 BaseBdev3 
00:10:06.687 BaseBdev4' 00:10:06.687 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.946 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.947 01:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.947 01:54:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.947 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.947 [2024-12-07 01:54:12.395368] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.206 
01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.206 "name": "Existed_Raid", 00:10:07.206 "uuid": "ac233000-78af-4079-a88b-9c14ffe57597", 00:10:07.206 "strip_size_kb": 0, 00:10:07.206 "state": "online", 00:10:07.206 "raid_level": "raid1", 00:10:07.206 "superblock": false, 00:10:07.206 "num_base_bdevs": 4, 00:10:07.206 "num_base_bdevs_discovered": 3, 00:10:07.206 "num_base_bdevs_operational": 3, 00:10:07.206 "base_bdevs_list": [ 00:10:07.206 { 00:10:07.206 "name": null, 00:10:07.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.206 "is_configured": false, 00:10:07.206 "data_offset": 0, 00:10:07.206 "data_size": 65536 00:10:07.206 }, 00:10:07.206 { 00:10:07.206 "name": "BaseBdev2", 00:10:07.206 "uuid": "f03dd8af-58ee-4590-a44a-734899ea0c1d", 00:10:07.206 "is_configured": true, 00:10:07.206 "data_offset": 0, 00:10:07.206 "data_size": 65536 00:10:07.206 }, 00:10:07.206 { 00:10:07.206 "name": "BaseBdev3", 00:10:07.206 "uuid": "b9e46a4c-b10e-4056-a4a9-5b4b0d97dd64", 00:10:07.206 "is_configured": true, 00:10:07.206 "data_offset": 0, 
00:10:07.206 "data_size": 65536 00:10:07.206 }, 00:10:07.206 { 00:10:07.206 "name": "BaseBdev4", 00:10:07.206 "uuid": "f93e250f-5279-4ef6-b2b1-b874f42dec03", 00:10:07.206 "is_configured": true, 00:10:07.206 "data_offset": 0, 00:10:07.206 "data_size": 65536 00:10:07.206 } 00:10:07.206 ] 00:10:07.206 }' 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.206 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.466 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.466 [2024-12-07 01:54:12.921856] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 [2024-12-07 01:54:12.977067] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 01:54:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 [2024-12-07 01:54:13.043729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:07.726 [2024-12-07 01:54:13.043853] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.726 [2024-12-07 01:54:13.055309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.726 [2024-12-07 01:54:13.055418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.726 [2024-12-07 01:54:13.055461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 BaseBdev2 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 [ 00:10:07.726 { 00:10:07.726 "name": "BaseBdev2", 00:10:07.726 "aliases": [ 00:10:07.726 "c98484ee-e95a-4770-86c8-f0808e9c5383" 00:10:07.726 ], 00:10:07.726 "product_name": "Malloc disk", 00:10:07.726 "block_size": 512, 00:10:07.726 "num_blocks": 65536, 00:10:07.726 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:07.726 "assigned_rate_limits": { 00:10:07.726 "rw_ios_per_sec": 0, 00:10:07.726 "rw_mbytes_per_sec": 0, 00:10:07.726 "r_mbytes_per_sec": 0, 00:10:07.726 "w_mbytes_per_sec": 0 00:10:07.726 }, 00:10:07.726 "claimed": false, 00:10:07.726 "zoned": false, 00:10:07.726 "supported_io_types": { 00:10:07.726 "read": true, 00:10:07.726 "write": true, 00:10:07.726 "unmap": true, 00:10:07.726 "flush": true, 00:10:07.726 "reset": true, 00:10:07.726 "nvme_admin": false, 00:10:07.726 "nvme_io": false, 00:10:07.726 "nvme_io_md": false, 00:10:07.726 "write_zeroes": true, 00:10:07.726 "zcopy": true, 00:10:07.726 "get_zone_info": false, 00:10:07.726 "zone_management": false, 00:10:07.726 "zone_append": false, 
00:10:07.726 "compare": false, 00:10:07.726 "compare_and_write": false, 00:10:07.726 "abort": true, 00:10:07.726 "seek_hole": false, 00:10:07.726 "seek_data": false, 00:10:07.726 "copy": true, 00:10:07.726 "nvme_iov_md": false 00:10:07.726 }, 00:10:07.726 "memory_domains": [ 00:10:07.726 { 00:10:07.726 "dma_device_id": "system", 00:10:07.726 "dma_device_type": 1 00:10:07.726 }, 00:10:07.726 { 00:10:07.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.726 "dma_device_type": 2 00:10:07.726 } 00:10:07.726 ], 00:10:07.726 "driver_specific": {} 00:10:07.726 } 00:10:07.726 ] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 BaseBdev3 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.726 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:07.727 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.727 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 [ 00:10:07.986 { 00:10:07.986 "name": "BaseBdev3", 00:10:07.986 "aliases": [ 00:10:07.986 "160dbd11-dffe-4ec5-a81c-9e7c07d01268" 00:10:07.986 ], 00:10:07.986 "product_name": "Malloc disk", 00:10:07.986 "block_size": 512, 00:10:07.986 "num_blocks": 65536, 00:10:07.986 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:07.986 "assigned_rate_limits": { 00:10:07.986 "rw_ios_per_sec": 0, 00:10:07.986 "rw_mbytes_per_sec": 0, 00:10:07.986 "r_mbytes_per_sec": 0, 00:10:07.986 "w_mbytes_per_sec": 0 00:10:07.986 }, 00:10:07.986 "claimed": false, 00:10:07.986 "zoned": false, 00:10:07.986 "supported_io_types": { 00:10:07.986 "read": true, 00:10:07.986 "write": true, 00:10:07.986 "unmap": true, 00:10:07.986 "flush": true, 00:10:07.986 "reset": true, 00:10:07.986 "nvme_admin": false, 00:10:07.986 "nvme_io": false, 00:10:07.986 "nvme_io_md": false, 00:10:07.986 "write_zeroes": true, 00:10:07.986 "zcopy": true, 00:10:07.986 "get_zone_info": false, 00:10:07.986 "zone_management": false, 00:10:07.986 "zone_append": false, 
00:10:07.986 "compare": false, 00:10:07.986 "compare_and_write": false, 00:10:07.986 "abort": true, 00:10:07.986 "seek_hole": false, 00:10:07.986 "seek_data": false, 00:10:07.986 "copy": true, 00:10:07.986 "nvme_iov_md": false 00:10:07.986 }, 00:10:07.986 "memory_domains": [ 00:10:07.986 { 00:10:07.986 "dma_device_id": "system", 00:10:07.986 "dma_device_type": 1 00:10:07.986 }, 00:10:07.986 { 00:10:07.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.986 "dma_device_type": 2 00:10:07.986 } 00:10:07.986 ], 00:10:07.986 "driver_specific": {} 00:10:07.986 } 00:10:07.986 ] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 BaseBdev4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 [ 00:10:07.986 { 00:10:07.986 "name": "BaseBdev4", 00:10:07.986 "aliases": [ 00:10:07.986 "dae91835-9100-42eb-b43d-3a3eae6fa83e" 00:10:07.986 ], 00:10:07.986 "product_name": "Malloc disk", 00:10:07.986 "block_size": 512, 00:10:07.986 "num_blocks": 65536, 00:10:07.986 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:07.986 "assigned_rate_limits": { 00:10:07.986 "rw_ios_per_sec": 0, 00:10:07.986 "rw_mbytes_per_sec": 0, 00:10:07.986 "r_mbytes_per_sec": 0, 00:10:07.986 "w_mbytes_per_sec": 0 00:10:07.986 }, 00:10:07.986 "claimed": false, 00:10:07.986 "zoned": false, 00:10:07.986 "supported_io_types": { 00:10:07.986 "read": true, 00:10:07.986 "write": true, 00:10:07.986 "unmap": true, 00:10:07.986 "flush": true, 00:10:07.986 "reset": true, 00:10:07.986 "nvme_admin": false, 00:10:07.986 "nvme_io": false, 00:10:07.986 "nvme_io_md": false, 00:10:07.986 "write_zeroes": true, 00:10:07.986 "zcopy": true, 00:10:07.986 "get_zone_info": false, 00:10:07.986 "zone_management": false, 00:10:07.986 "zone_append": false, 
00:10:07.986 "compare": false, 00:10:07.986 "compare_and_write": false, 00:10:07.986 "abort": true, 00:10:07.986 "seek_hole": false, 00:10:07.986 "seek_data": false, 00:10:07.986 "copy": true, 00:10:07.986 "nvme_iov_md": false 00:10:07.986 }, 00:10:07.986 "memory_domains": [ 00:10:07.986 { 00:10:07.986 "dma_device_id": "system", 00:10:07.986 "dma_device_type": 1 00:10:07.986 }, 00:10:07.986 { 00:10:07.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.986 "dma_device_type": 2 00:10:07.986 } 00:10:07.986 ], 00:10:07.986 "driver_specific": {} 00:10:07.986 } 00:10:07.986 ] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 [2024-12-07 01:54:13.259231] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.986 [2024-12-07 01:54:13.259317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.986 [2024-12-07 01:54:13.259359] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:07.986 [2024-12-07 01:54:13.261247] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:07.986 [2024-12-07 01:54:13.261328] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.986 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:07.986 "name": "Existed_Raid", 00:10:07.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.986 "strip_size_kb": 0, 00:10:07.986 "state": "configuring", 00:10:07.986 "raid_level": "raid1", 00:10:07.986 "superblock": false, 00:10:07.986 "num_base_bdevs": 4, 00:10:07.986 "num_base_bdevs_discovered": 3, 00:10:07.986 "num_base_bdevs_operational": 4, 00:10:07.986 "base_bdevs_list": [ 00:10:07.986 { 00:10:07.986 "name": "BaseBdev1", 00:10:07.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.986 "is_configured": false, 00:10:07.986 "data_offset": 0, 00:10:07.986 "data_size": 0 00:10:07.987 }, 00:10:07.987 { 00:10:07.987 "name": "BaseBdev2", 00:10:07.987 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:07.987 "is_configured": true, 00:10:07.987 "data_offset": 0, 00:10:07.987 "data_size": 65536 00:10:07.987 }, 00:10:07.987 { 00:10:07.987 "name": "BaseBdev3", 00:10:07.987 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:07.987 "is_configured": true, 00:10:07.987 "data_offset": 0, 00:10:07.987 "data_size": 65536 00:10:07.987 }, 00:10:07.987 { 00:10:07.987 "name": "BaseBdev4", 00:10:07.987 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:07.987 "is_configured": true, 00:10:07.987 "data_offset": 0, 00:10:07.987 "data_size": 65536 00:10:07.987 } 00:10:07.987 ] 00:10:07.987 }' 00:10:07.987 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.987 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.245 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:08.245 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.246 [2024-12-07 01:54:13.686605] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.246 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.505 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.505 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.505 "name": "Existed_Raid", 00:10:08.505 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:08.505 "strip_size_kb": 0, 00:10:08.505 "state": "configuring", 00:10:08.505 "raid_level": "raid1", 00:10:08.505 "superblock": false, 00:10:08.505 "num_base_bdevs": 4, 00:10:08.505 "num_base_bdevs_discovered": 2, 00:10:08.505 "num_base_bdevs_operational": 4, 00:10:08.505 "base_bdevs_list": [ 00:10:08.505 { 00:10:08.505 "name": "BaseBdev1", 00:10:08.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.505 "is_configured": false, 00:10:08.505 "data_offset": 0, 00:10:08.505 "data_size": 0 00:10:08.505 }, 00:10:08.505 { 00:10:08.505 "name": null, 00:10:08.505 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:08.505 "is_configured": false, 00:10:08.505 "data_offset": 0, 00:10:08.505 "data_size": 65536 00:10:08.505 }, 00:10:08.505 { 00:10:08.505 "name": "BaseBdev3", 00:10:08.505 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:08.505 "is_configured": true, 00:10:08.505 "data_offset": 0, 00:10:08.505 "data_size": 65536 00:10:08.505 }, 00:10:08.505 { 00:10:08.505 "name": "BaseBdev4", 00:10:08.505 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:08.505 "is_configured": true, 00:10:08.505 "data_offset": 0, 00:10:08.505 "data_size": 65536 00:10:08.505 } 00:10:08.505 ] 00:10:08.505 }' 00:10:08.505 01:54:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.505 01:54:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.764 [2024-12-07 01:54:14.188535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.764 BaseBdev1 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.764 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.764 [ 00:10:08.764 { 00:10:08.764 "name": "BaseBdev1", 00:10:08.764 "aliases": [ 00:10:08.764 "62c900ab-86cd-49c7-8c8c-8ca7f019b644" 00:10:08.764 ], 00:10:08.764 "product_name": "Malloc disk", 00:10:08.764 "block_size": 512, 00:10:08.764 "num_blocks": 65536, 00:10:08.764 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:08.764 "assigned_rate_limits": { 00:10:08.764 "rw_ios_per_sec": 0, 00:10:08.764 "rw_mbytes_per_sec": 0, 00:10:08.764 "r_mbytes_per_sec": 0, 00:10:08.764 "w_mbytes_per_sec": 0 00:10:08.764 }, 00:10:08.764 "claimed": true, 00:10:08.764 "claim_type": "exclusive_write", 00:10:08.764 "zoned": false, 00:10:08.764 "supported_io_types": { 00:10:08.764 "read": true, 00:10:08.764 "write": true, 00:10:08.764 "unmap": true, 00:10:08.764 "flush": true, 00:10:08.764 "reset": true, 00:10:08.764 "nvme_admin": false, 00:10:08.764 "nvme_io": false, 00:10:08.764 "nvme_io_md": false, 00:10:08.764 "write_zeroes": true, 00:10:08.764 "zcopy": true, 00:10:08.764 "get_zone_info": false, 00:10:08.764 "zone_management": false, 00:10:08.764 "zone_append": false, 00:10:08.764 "compare": false, 00:10:08.764 "compare_and_write": false, 00:10:08.764 "abort": true, 00:10:08.764 "seek_hole": false, 00:10:08.764 "seek_data": false, 00:10:08.764 "copy": true, 00:10:08.764 "nvme_iov_md": false 00:10:08.764 }, 00:10:08.764 "memory_domains": [ 00:10:09.023 { 00:10:09.023 "dma_device_id": "system", 00:10:09.023 "dma_device_type": 1 00:10:09.023 }, 00:10:09.023 { 00:10:09.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.023 "dma_device_type": 2 00:10:09.023 } 00:10:09.023 ], 00:10:09.023 "driver_specific": {} 00:10:09.023 } 00:10:09.023 ] 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.023 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.024 "name": "Existed_Raid", 00:10:09.024 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:09.024 "strip_size_kb": 0, 00:10:09.024 "state": "configuring", 00:10:09.024 "raid_level": "raid1", 00:10:09.024 "superblock": false, 00:10:09.024 "num_base_bdevs": 4, 00:10:09.024 "num_base_bdevs_discovered": 3, 00:10:09.024 "num_base_bdevs_operational": 4, 00:10:09.024 "base_bdevs_list": [ 00:10:09.024 { 00:10:09.024 "name": "BaseBdev1", 00:10:09.024 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:09.024 "is_configured": true, 00:10:09.024 "data_offset": 0, 00:10:09.024 "data_size": 65536 00:10:09.024 }, 00:10:09.024 { 00:10:09.024 "name": null, 00:10:09.024 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:09.024 "is_configured": false, 00:10:09.024 "data_offset": 0, 00:10:09.024 "data_size": 65536 00:10:09.024 }, 00:10:09.024 { 00:10:09.024 "name": "BaseBdev3", 00:10:09.024 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:09.024 "is_configured": true, 00:10:09.024 "data_offset": 0, 00:10:09.024 "data_size": 65536 00:10:09.024 }, 00:10:09.024 { 00:10:09.024 "name": "BaseBdev4", 00:10:09.024 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:09.024 "is_configured": true, 00:10:09.024 "data_offset": 0, 00:10:09.024 "data_size": 65536 00:10:09.024 } 00:10:09.024 ] 00:10:09.024 }' 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.024 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.284 [2024-12-07 01:54:14.703779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.284 01:54:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.543 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.543 "name": "Existed_Raid", 00:10:09.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.543 "strip_size_kb": 0, 00:10:09.543 "state": "configuring", 00:10:09.544 "raid_level": "raid1", 00:10:09.544 "superblock": false, 00:10:09.544 "num_base_bdevs": 4, 00:10:09.544 "num_base_bdevs_discovered": 2, 00:10:09.544 "num_base_bdevs_operational": 4, 00:10:09.544 "base_bdevs_list": [ 00:10:09.544 { 00:10:09.544 "name": "BaseBdev1", 00:10:09.544 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:09.544 "is_configured": true, 00:10:09.544 "data_offset": 0, 00:10:09.544 "data_size": 65536 00:10:09.544 }, 00:10:09.544 { 00:10:09.544 "name": null, 00:10:09.544 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:09.544 "is_configured": false, 00:10:09.544 "data_offset": 0, 00:10:09.544 "data_size": 65536 00:10:09.544 }, 00:10:09.544 { 00:10:09.544 "name": null, 00:10:09.544 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:09.544 "is_configured": false, 00:10:09.544 "data_offset": 0, 00:10:09.544 "data_size": 65536 00:10:09.544 }, 00:10:09.544 { 00:10:09.544 "name": "BaseBdev4", 00:10:09.544 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:09.544 "is_configured": true, 00:10:09.544 "data_offset": 0, 00:10:09.544 "data_size": 65536 00:10:09.544 } 00:10:09.544 ] 00:10:09.544 }' 00:10:09.544 01:54:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.544 01:54:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.802 [2024-12-07 01:54:15.131193] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.802 01:54:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.802 "name": "Existed_Raid", 00:10:09.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.802 "strip_size_kb": 0, 00:10:09.802 "state": "configuring", 00:10:09.802 "raid_level": "raid1", 00:10:09.802 "superblock": false, 00:10:09.802 "num_base_bdevs": 4, 00:10:09.802 "num_base_bdevs_discovered": 3, 00:10:09.802 "num_base_bdevs_operational": 4, 00:10:09.802 "base_bdevs_list": [ 00:10:09.802 { 00:10:09.802 "name": "BaseBdev1", 00:10:09.802 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:09.802 "is_configured": true, 00:10:09.802 "data_offset": 0, 00:10:09.802 "data_size": 65536 00:10:09.802 }, 00:10:09.802 { 00:10:09.802 "name": null, 00:10:09.802 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:09.802 "is_configured": false, 00:10:09.802 "data_offset": 
0, 00:10:09.802 "data_size": 65536 00:10:09.802 }, 00:10:09.802 { 00:10:09.802 "name": "BaseBdev3", 00:10:09.802 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:09.802 "is_configured": true, 00:10:09.802 "data_offset": 0, 00:10:09.802 "data_size": 65536 00:10:09.802 }, 00:10:09.802 { 00:10:09.802 "name": "BaseBdev4", 00:10:09.802 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:09.802 "is_configured": true, 00:10:09.802 "data_offset": 0, 00:10:09.802 "data_size": 65536 00:10:09.802 } 00:10:09.802 ] 00:10:09.802 }' 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.802 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.371 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.372 [2024-12-07 01:54:15.558416] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.372 01:54:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.372 "name": "Existed_Raid", 00:10:10.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.372 "strip_size_kb": 0, 00:10:10.372 "state": "configuring", 00:10:10.372 
"raid_level": "raid1", 00:10:10.372 "superblock": false, 00:10:10.372 "num_base_bdevs": 4, 00:10:10.372 "num_base_bdevs_discovered": 2, 00:10:10.372 "num_base_bdevs_operational": 4, 00:10:10.372 "base_bdevs_list": [ 00:10:10.372 { 00:10:10.372 "name": null, 00:10:10.372 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:10.372 "is_configured": false, 00:10:10.372 "data_offset": 0, 00:10:10.372 "data_size": 65536 00:10:10.372 }, 00:10:10.372 { 00:10:10.372 "name": null, 00:10:10.372 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:10.372 "is_configured": false, 00:10:10.372 "data_offset": 0, 00:10:10.372 "data_size": 65536 00:10:10.372 }, 00:10:10.372 { 00:10:10.372 "name": "BaseBdev3", 00:10:10.372 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:10.372 "is_configured": true, 00:10:10.372 "data_offset": 0, 00:10:10.372 "data_size": 65536 00:10:10.372 }, 00:10:10.372 { 00:10:10.372 "name": "BaseBdev4", 00:10:10.372 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:10.372 "is_configured": true, 00:10:10.372 "data_offset": 0, 00:10:10.372 "data_size": 65536 00:10:10.372 } 00:10:10.372 ] 00:10:10.372 }' 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.372 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.632 01:54:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 [2024-12-07 01:54:16.008173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.632 "name": "Existed_Raid", 00:10:10.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.632 "strip_size_kb": 0, 00:10:10.632 "state": "configuring", 00:10:10.632 "raid_level": "raid1", 00:10:10.632 "superblock": false, 00:10:10.632 "num_base_bdevs": 4, 00:10:10.632 "num_base_bdevs_discovered": 3, 00:10:10.632 "num_base_bdevs_operational": 4, 00:10:10.632 "base_bdevs_list": [ 00:10:10.632 { 00:10:10.632 "name": null, 00:10:10.632 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:10.632 "is_configured": false, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 65536 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev2", 00:10:10.632 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:10.632 "is_configured": true, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 65536 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev3", 00:10:10.632 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:10.632 "is_configured": true, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 65536 00:10:10.632 }, 00:10:10.632 { 00:10:10.632 "name": "BaseBdev4", 00:10:10.632 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:10.632 "is_configured": true, 00:10:10.632 "data_offset": 0, 00:10:10.632 "data_size": 65536 00:10:10.632 } 00:10:10.632 ] 00:10:10.632 }' 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.632 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 01:54:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 62c900ab-86cd-49c7-8c8c-8ca7f019b644 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 [2024-12-07 01:54:16.570119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:11.203 [2024-12-07 01:54:16.570243] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:11.203 [2024-12-07 01:54:16.570272] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:11.203 
[2024-12-07 01:54:16.570586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:11.203 NewBaseBdev 00:10:11.203 [2024-12-07 01:54:16.570768] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:11.203 [2024-12-07 01:54:16.570782] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:11.203 [2024-12-07 01:54:16.570965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 [ 00:10:11.203 { 00:10:11.203 "name": "NewBaseBdev", 00:10:11.203 "aliases": [ 00:10:11.203 "62c900ab-86cd-49c7-8c8c-8ca7f019b644" 00:10:11.203 ], 00:10:11.203 "product_name": "Malloc disk", 00:10:11.203 "block_size": 512, 00:10:11.203 "num_blocks": 65536, 00:10:11.203 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:11.203 "assigned_rate_limits": { 00:10:11.203 "rw_ios_per_sec": 0, 00:10:11.203 "rw_mbytes_per_sec": 0, 00:10:11.203 "r_mbytes_per_sec": 0, 00:10:11.203 "w_mbytes_per_sec": 0 00:10:11.203 }, 00:10:11.203 "claimed": true, 00:10:11.203 "claim_type": "exclusive_write", 00:10:11.203 "zoned": false, 00:10:11.203 "supported_io_types": { 00:10:11.203 "read": true, 00:10:11.203 "write": true, 00:10:11.203 "unmap": true, 00:10:11.203 "flush": true, 00:10:11.203 "reset": true, 00:10:11.203 "nvme_admin": false, 00:10:11.203 "nvme_io": false, 00:10:11.203 "nvme_io_md": false, 00:10:11.203 "write_zeroes": true, 00:10:11.203 "zcopy": true, 00:10:11.203 "get_zone_info": false, 00:10:11.203 "zone_management": false, 00:10:11.203 "zone_append": false, 00:10:11.203 "compare": false, 00:10:11.203 "compare_and_write": false, 00:10:11.203 "abort": true, 00:10:11.203 "seek_hole": false, 00:10:11.203 "seek_data": false, 00:10:11.203 "copy": true, 00:10:11.203 "nvme_iov_md": false 00:10:11.203 }, 00:10:11.203 "memory_domains": [ 00:10:11.203 { 00:10:11.203 "dma_device_id": "system", 00:10:11.203 "dma_device_type": 1 00:10:11.203 }, 00:10:11.203 { 00:10:11.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.203 "dma_device_type": 2 00:10:11.203 } 00:10:11.203 ], 00:10:11.203 "driver_specific": {} 00:10:11.203 } 00:10:11.203 ] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.203 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.463 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.463 "name": "Existed_Raid", 00:10:11.463 "uuid": "2a109de9-e603-4959-91d0-9f16ec932825", 00:10:11.463 "strip_size_kb": 0, 00:10:11.463 "state": "online", 00:10:11.463 
"raid_level": "raid1", 00:10:11.463 "superblock": false, 00:10:11.463 "num_base_bdevs": 4, 00:10:11.463 "num_base_bdevs_discovered": 4, 00:10:11.463 "num_base_bdevs_operational": 4, 00:10:11.463 "base_bdevs_list": [ 00:10:11.463 { 00:10:11.463 "name": "NewBaseBdev", 00:10:11.463 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:11.463 "is_configured": true, 00:10:11.463 "data_offset": 0, 00:10:11.463 "data_size": 65536 00:10:11.463 }, 00:10:11.463 { 00:10:11.463 "name": "BaseBdev2", 00:10:11.463 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:11.463 "is_configured": true, 00:10:11.463 "data_offset": 0, 00:10:11.463 "data_size": 65536 00:10:11.464 }, 00:10:11.464 { 00:10:11.464 "name": "BaseBdev3", 00:10:11.464 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:11.464 "is_configured": true, 00:10:11.464 "data_offset": 0, 00:10:11.464 "data_size": 65536 00:10:11.464 }, 00:10:11.464 { 00:10:11.464 "name": "BaseBdev4", 00:10:11.464 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:11.464 "is_configured": true, 00:10:11.464 "data_offset": 0, 00:10:11.464 "data_size": 65536 00:10:11.464 } 00:10:11.464 ] 00:10:11.464 }' 00:10:11.464 01:54:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.464 01:54:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.724 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.724 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.724 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.724 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.725 [2024-12-07 01:54:17.061694] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.725 "name": "Existed_Raid", 00:10:11.725 "aliases": [ 00:10:11.725 "2a109de9-e603-4959-91d0-9f16ec932825" 00:10:11.725 ], 00:10:11.725 "product_name": "Raid Volume", 00:10:11.725 "block_size": 512, 00:10:11.725 "num_blocks": 65536, 00:10:11.725 "uuid": "2a109de9-e603-4959-91d0-9f16ec932825", 00:10:11.725 "assigned_rate_limits": { 00:10:11.725 "rw_ios_per_sec": 0, 00:10:11.725 "rw_mbytes_per_sec": 0, 00:10:11.725 "r_mbytes_per_sec": 0, 00:10:11.725 "w_mbytes_per_sec": 0 00:10:11.725 }, 00:10:11.725 "claimed": false, 00:10:11.725 "zoned": false, 00:10:11.725 "supported_io_types": { 00:10:11.725 "read": true, 00:10:11.725 "write": true, 00:10:11.725 "unmap": false, 00:10:11.725 "flush": false, 00:10:11.725 "reset": true, 00:10:11.725 "nvme_admin": false, 00:10:11.725 "nvme_io": false, 00:10:11.725 "nvme_io_md": false, 00:10:11.725 "write_zeroes": true, 00:10:11.725 "zcopy": false, 00:10:11.725 "get_zone_info": false, 00:10:11.725 "zone_management": false, 00:10:11.725 "zone_append": false, 00:10:11.725 "compare": false, 00:10:11.725 "compare_and_write": false, 00:10:11.725 "abort": false, 00:10:11.725 "seek_hole": false, 00:10:11.725 "seek_data": false, 00:10:11.725 
"copy": false, 00:10:11.725 "nvme_iov_md": false 00:10:11.725 }, 00:10:11.725 "memory_domains": [ 00:10:11.725 { 00:10:11.725 "dma_device_id": "system", 00:10:11.725 "dma_device_type": 1 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.725 "dma_device_type": 2 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "system", 00:10:11.725 "dma_device_type": 1 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.725 "dma_device_type": 2 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "system", 00:10:11.725 "dma_device_type": 1 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.725 "dma_device_type": 2 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "system", 00:10:11.725 "dma_device_type": 1 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.725 "dma_device_type": 2 00:10:11.725 } 00:10:11.725 ], 00:10:11.725 "driver_specific": { 00:10:11.725 "raid": { 00:10:11.725 "uuid": "2a109de9-e603-4959-91d0-9f16ec932825", 00:10:11.725 "strip_size_kb": 0, 00:10:11.725 "state": "online", 00:10:11.725 "raid_level": "raid1", 00:10:11.725 "superblock": false, 00:10:11.725 "num_base_bdevs": 4, 00:10:11.725 "num_base_bdevs_discovered": 4, 00:10:11.725 "num_base_bdevs_operational": 4, 00:10:11.725 "base_bdevs_list": [ 00:10:11.725 { 00:10:11.725 "name": "NewBaseBdev", 00:10:11.725 "uuid": "62c900ab-86cd-49c7-8c8c-8ca7f019b644", 00:10:11.725 "is_configured": true, 00:10:11.725 "data_offset": 0, 00:10:11.725 "data_size": 65536 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "BaseBdev2", 00:10:11.725 "uuid": "c98484ee-e95a-4770-86c8-f0808e9c5383", 00:10:11.725 "is_configured": true, 00:10:11.725 "data_offset": 0, 00:10:11.725 "data_size": 65536 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "BaseBdev3", 00:10:11.725 "uuid": "160dbd11-dffe-4ec5-a81c-9e7c07d01268", 00:10:11.725 
"is_configured": true, 00:10:11.725 "data_offset": 0, 00:10:11.725 "data_size": 65536 00:10:11.725 }, 00:10:11.725 { 00:10:11.725 "name": "BaseBdev4", 00:10:11.725 "uuid": "dae91835-9100-42eb-b43d-3a3eae6fa83e", 00:10:11.725 "is_configured": true, 00:10:11.725 "data_offset": 0, 00:10:11.725 "data_size": 65536 00:10:11.725 } 00:10:11.725 ] 00:10:11.725 } 00:10:11.725 } 00:10:11.725 }' 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.725 BaseBdev2 00:10:11.725 BaseBdev3 00:10:11.725 BaseBdev4' 00:10:11.725 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.985 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.986 01:54:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.986 01:54:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 [2024-12-07 01:54:17.396763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.986 [2024-12-07 01:54:17.396827] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.986 [2024-12-07 01:54:17.396940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.986 [2024-12-07 01:54:17.397233] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.986 [2024-12-07 01:54:17.397285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83698 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83698 ']' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83698 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83698 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83698' 00:10:11.986 killing process with pid 83698 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 83698 00:10:11.986 [2024-12-07 01:54:17.438453] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.986 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 83698 00:10:12.245 [2024-12-07 01:54:17.479400] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:12.505 00:10:12.505 real 0m9.366s 00:10:12.505 user 0m16.038s 00:10:12.505 sys 0m1.879s 00:10:12.505 ************************************ 00:10:12.505 END TEST raid_state_function_test 00:10:12.505 ************************************ 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:12.505 01:54:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:12.505 01:54:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:12.505 01:54:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.505 01:54:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:12.505 ************************************ 00:10:12.505 START TEST raid_state_function_test_sb 00:10:12.505 ************************************ 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.505 
01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:12.505 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84347 00:10:12.506 Process raid pid: 84347 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84347' 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84347 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 84347 ']' 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.506 01:54:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.506 [2024-12-07 01:54:17.884994] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:12.506 [2024-12-07 01:54:17.885186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.765 [2024-12-07 01:54:18.030208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.766 [2024-12-07 01:54:18.074073] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.766 [2024-12-07 01:54:18.115260] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.766 [2024-12-07 01:54:18.115382] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.336 [2024-12-07 01:54:18.724183] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.336 [2024-12-07 01:54:18.724279] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.336 [2024-12-07 01:54:18.724316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.336 [2024-12-07 01:54:18.724356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.336 [2024-12-07 01:54:18.724374] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:13.336 [2024-12-07 01:54:18.724494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.336 [2024-12-07 01:54:18.724520] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:13.336 [2024-12-07 01:54:18.724542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.336 01:54:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.336 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.337 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.337 "name": "Existed_Raid", 00:10:13.337 "uuid": "3360df54-4cc1-4c23-bb07-f820b23425bc", 00:10:13.337 "strip_size_kb": 0, 00:10:13.337 "state": "configuring", 00:10:13.337 "raid_level": "raid1", 00:10:13.337 "superblock": true, 00:10:13.337 "num_base_bdevs": 4, 00:10:13.337 "num_base_bdevs_discovered": 0, 00:10:13.337 "num_base_bdevs_operational": 4, 00:10:13.337 "base_bdevs_list": [ 00:10:13.337 { 00:10:13.337 "name": "BaseBdev1", 00:10:13.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.337 "is_configured": false, 00:10:13.337 "data_offset": 0, 00:10:13.337 "data_size": 0 00:10:13.337 }, 00:10:13.337 { 00:10:13.337 "name": "BaseBdev2", 00:10:13.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.337 "is_configured": false, 00:10:13.337 "data_offset": 0, 00:10:13.337 "data_size": 0 00:10:13.337 }, 00:10:13.337 { 00:10:13.337 "name": "BaseBdev3", 00:10:13.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.337 "is_configured": false, 00:10:13.337 "data_offset": 0, 00:10:13.337 "data_size": 0 00:10:13.337 }, 00:10:13.337 { 00:10:13.337 "name": "BaseBdev4", 00:10:13.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.337 "is_configured": false, 00:10:13.337 "data_offset": 0, 00:10:13.337 "data_size": 0 00:10:13.337 } 00:10:13.337 ] 00:10:13.337 }' 00:10:13.337 01:54:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.337 01:54:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 [2024-12-07 01:54:19.175298] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.945 [2024-12-07 01:54:19.175395] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 [2024-12-07 01:54:19.187285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:13.945 [2024-12-07 01:54:19.187360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:13.945 [2024-12-07 01:54:19.187403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.945 [2024-12-07 01:54:19.187426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.945 [2024-12-07 01:54:19.187445] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.945 [2024-12-07 01:54:19.187465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.945 [2024-12-07 01:54:19.187484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:13.945 [2024-12-07 01:54:19.187505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 [2024-12-07 01:54:19.207794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:13.945 BaseBdev1 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 [ 00:10:13.945 { 00:10:13.945 "name": "BaseBdev1", 00:10:13.945 "aliases": [ 00:10:13.945 "62bba8bd-9234-40da-80c6-755c15c54e18" 00:10:13.945 ], 00:10:13.945 "product_name": "Malloc disk", 00:10:13.945 "block_size": 512, 00:10:13.945 "num_blocks": 65536, 00:10:13.945 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:13.945 "assigned_rate_limits": { 00:10:13.945 "rw_ios_per_sec": 0, 00:10:13.945 "rw_mbytes_per_sec": 0, 00:10:13.945 "r_mbytes_per_sec": 0, 00:10:13.945 "w_mbytes_per_sec": 0 00:10:13.945 }, 00:10:13.945 "claimed": true, 00:10:13.945 "claim_type": "exclusive_write", 00:10:13.945 "zoned": false, 00:10:13.945 "supported_io_types": { 00:10:13.945 "read": true, 00:10:13.945 "write": true, 00:10:13.945 "unmap": true, 00:10:13.945 "flush": true, 00:10:13.945 "reset": true, 00:10:13.945 "nvme_admin": false, 00:10:13.945 "nvme_io": false, 00:10:13.945 "nvme_io_md": false, 00:10:13.945 "write_zeroes": true, 00:10:13.945 "zcopy": true, 00:10:13.945 "get_zone_info": false, 00:10:13.945 "zone_management": false, 00:10:13.945 "zone_append": false, 00:10:13.945 "compare": false, 00:10:13.945 "compare_and_write": false, 00:10:13.945 "abort": true, 00:10:13.945 "seek_hole": false, 00:10:13.945 "seek_data": false, 00:10:13.945 "copy": true, 00:10:13.945 "nvme_iov_md": false 00:10:13.945 }, 00:10:13.945 "memory_domains": [ 00:10:13.945 { 00:10:13.945 "dma_device_id": "system", 00:10:13.945 "dma_device_type": 1 00:10:13.945 }, 00:10:13.945 { 00:10:13.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.945 "dma_device_type": 2 00:10:13.945 } 00:10:13.945 ], 00:10:13.945 "driver_specific": {} 
00:10:13.945 } 00:10:13.945 ] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.945 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.945 "name": "Existed_Raid", 00:10:13.945 "uuid": "88b1a7e4-e181-45ea-9c78-e7981a76a729", 00:10:13.945 "strip_size_kb": 0, 00:10:13.945 "state": "configuring", 00:10:13.945 "raid_level": "raid1", 00:10:13.945 "superblock": true, 00:10:13.945 "num_base_bdevs": 4, 00:10:13.945 "num_base_bdevs_discovered": 1, 00:10:13.945 "num_base_bdevs_operational": 4, 00:10:13.945 "base_bdevs_list": [ 00:10:13.945 { 00:10:13.945 "name": "BaseBdev1", 00:10:13.945 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:13.945 "is_configured": true, 00:10:13.945 "data_offset": 2048, 00:10:13.945 "data_size": 63488 00:10:13.945 }, 00:10:13.945 { 00:10:13.945 "name": "BaseBdev2", 00:10:13.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.945 "is_configured": false, 00:10:13.945 "data_offset": 0, 00:10:13.945 "data_size": 0 00:10:13.945 }, 00:10:13.945 { 00:10:13.945 "name": "BaseBdev3", 00:10:13.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.945 "is_configured": false, 00:10:13.945 "data_offset": 0, 00:10:13.945 "data_size": 0 00:10:13.945 }, 00:10:13.946 { 00:10:13.946 "name": "BaseBdev4", 00:10:13.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.946 "is_configured": false, 00:10:13.946 "data_offset": 0, 00:10:13.946 "data_size": 0 00:10:13.946 } 00:10:13.946 ] 00:10:13.946 }' 00:10:13.946 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.946 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:14.512 [2024-12-07 01:54:19.699010] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.512 [2024-12-07 01:54:19.699119] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.512 [2024-12-07 01:54:19.707042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.512 [2024-12-07 01:54:19.708992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:14.512 [2024-12-07 01:54:19.709088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:14.512 [2024-12-07 01:54:19.709115] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:14.512 [2024-12-07 01:54:19.709136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:14.512 [2024-12-07 01:54:19.709154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:14.512 [2024-12-07 01:54:19.709173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.512 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:14.513 01:54:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.513 "name": 
"Existed_Raid", 00:10:14.513 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:14.513 "strip_size_kb": 0, 00:10:14.513 "state": "configuring", 00:10:14.513 "raid_level": "raid1", 00:10:14.513 "superblock": true, 00:10:14.513 "num_base_bdevs": 4, 00:10:14.513 "num_base_bdevs_discovered": 1, 00:10:14.513 "num_base_bdevs_operational": 4, 00:10:14.513 "base_bdevs_list": [ 00:10:14.513 { 00:10:14.513 "name": "BaseBdev1", 00:10:14.513 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:14.513 "is_configured": true, 00:10:14.513 "data_offset": 2048, 00:10:14.513 "data_size": 63488 00:10:14.513 }, 00:10:14.513 { 00:10:14.513 "name": "BaseBdev2", 00:10:14.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.513 "is_configured": false, 00:10:14.513 "data_offset": 0, 00:10:14.513 "data_size": 0 00:10:14.513 }, 00:10:14.513 { 00:10:14.513 "name": "BaseBdev3", 00:10:14.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.513 "is_configured": false, 00:10:14.513 "data_offset": 0, 00:10:14.513 "data_size": 0 00:10:14.513 }, 00:10:14.513 { 00:10:14.513 "name": "BaseBdev4", 00:10:14.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.513 "is_configured": false, 00:10:14.513 "data_offset": 0, 00:10:14.513 "data_size": 0 00:10:14.513 } 00:10:14.513 ] 00:10:14.513 }' 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.513 01:54:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.772 [2024-12-07 01:54:20.173160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.772 
BaseBdev2 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.772 [ 00:10:14.772 { 00:10:14.772 "name": "BaseBdev2", 00:10:14.772 "aliases": [ 00:10:14.772 "c8b4249b-2d0b-4654-a122-ae73022af5a4" 00:10:14.772 ], 00:10:14.772 "product_name": "Malloc disk", 00:10:14.772 "block_size": 512, 00:10:14.772 "num_blocks": 65536, 00:10:14.772 "uuid": "c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:14.772 "assigned_rate_limits": { 
00:10:14.772 "rw_ios_per_sec": 0, 00:10:14.772 "rw_mbytes_per_sec": 0, 00:10:14.772 "r_mbytes_per_sec": 0, 00:10:14.772 "w_mbytes_per_sec": 0 00:10:14.772 }, 00:10:14.772 "claimed": true, 00:10:14.772 "claim_type": "exclusive_write", 00:10:14.772 "zoned": false, 00:10:14.772 "supported_io_types": { 00:10:14.772 "read": true, 00:10:14.772 "write": true, 00:10:14.772 "unmap": true, 00:10:14.772 "flush": true, 00:10:14.772 "reset": true, 00:10:14.772 "nvme_admin": false, 00:10:14.772 "nvme_io": false, 00:10:14.772 "nvme_io_md": false, 00:10:14.772 "write_zeroes": true, 00:10:14.772 "zcopy": true, 00:10:14.772 "get_zone_info": false, 00:10:14.772 "zone_management": false, 00:10:14.772 "zone_append": false, 00:10:14.772 "compare": false, 00:10:14.772 "compare_and_write": false, 00:10:14.772 "abort": true, 00:10:14.772 "seek_hole": false, 00:10:14.772 "seek_data": false, 00:10:14.772 "copy": true, 00:10:14.772 "nvme_iov_md": false 00:10:14.772 }, 00:10:14.772 "memory_domains": [ 00:10:14.772 { 00:10:14.772 "dma_device_id": "system", 00:10:14.772 "dma_device_type": 1 00:10:14.772 }, 00:10:14.772 { 00:10:14.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.772 "dma_device_type": 2 00:10:14.772 } 00:10:14.772 ], 00:10:14.772 "driver_specific": {} 00:10:14.772 } 00:10:14.772 ] 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.772 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.773 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.058 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.058 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.058 "name": "Existed_Raid", 00:10:15.058 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:15.058 "strip_size_kb": 0, 00:10:15.058 "state": "configuring", 00:10:15.058 "raid_level": "raid1", 00:10:15.058 "superblock": true, 00:10:15.058 "num_base_bdevs": 4, 00:10:15.058 "num_base_bdevs_discovered": 2, 00:10:15.058 "num_base_bdevs_operational": 4, 00:10:15.058 
"base_bdevs_list": [ 00:10:15.058 { 00:10:15.058 "name": "BaseBdev1", 00:10:15.058 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:15.058 "is_configured": true, 00:10:15.058 "data_offset": 2048, 00:10:15.058 "data_size": 63488 00:10:15.058 }, 00:10:15.058 { 00:10:15.058 "name": "BaseBdev2", 00:10:15.058 "uuid": "c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:15.058 "is_configured": true, 00:10:15.058 "data_offset": 2048, 00:10:15.058 "data_size": 63488 00:10:15.058 }, 00:10:15.058 { 00:10:15.058 "name": "BaseBdev3", 00:10:15.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.058 "is_configured": false, 00:10:15.058 "data_offset": 0, 00:10:15.058 "data_size": 0 00:10:15.058 }, 00:10:15.058 { 00:10:15.058 "name": "BaseBdev4", 00:10:15.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.058 "is_configured": false, 00:10:15.058 "data_offset": 0, 00:10:15.058 "data_size": 0 00:10:15.058 } 00:10:15.058 ] 00:10:15.058 }' 00:10:15.058 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.058 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.318 BaseBdev3 00:10:15.318 [2024-12-07 01:54:20.639231] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev3 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.318 [ 00:10:15.318 { 00:10:15.318 "name": "BaseBdev3", 00:10:15.318 "aliases": [ 00:10:15.318 "cde4fa83-0062-4ff7-b9b4-0c7494c7abac" 00:10:15.318 ], 00:10:15.318 "product_name": "Malloc disk", 00:10:15.318 "block_size": 512, 00:10:15.318 "num_blocks": 65536, 00:10:15.318 "uuid": "cde4fa83-0062-4ff7-b9b4-0c7494c7abac", 00:10:15.318 "assigned_rate_limits": { 00:10:15.318 "rw_ios_per_sec": 0, 00:10:15.318 "rw_mbytes_per_sec": 0, 00:10:15.318 "r_mbytes_per_sec": 0, 00:10:15.318 "w_mbytes_per_sec": 0 00:10:15.318 }, 00:10:15.318 "claimed": true, 00:10:15.318 "claim_type": "exclusive_write", 00:10:15.318 "zoned": false, 00:10:15.318 "supported_io_types": { 00:10:15.318 "read": true, 00:10:15.318 
"write": true, 00:10:15.318 "unmap": true, 00:10:15.318 "flush": true, 00:10:15.318 "reset": true, 00:10:15.318 "nvme_admin": false, 00:10:15.318 "nvme_io": false, 00:10:15.318 "nvme_io_md": false, 00:10:15.318 "write_zeroes": true, 00:10:15.318 "zcopy": true, 00:10:15.318 "get_zone_info": false, 00:10:15.318 "zone_management": false, 00:10:15.318 "zone_append": false, 00:10:15.318 "compare": false, 00:10:15.318 "compare_and_write": false, 00:10:15.318 "abort": true, 00:10:15.318 "seek_hole": false, 00:10:15.318 "seek_data": false, 00:10:15.318 "copy": true, 00:10:15.318 "nvme_iov_md": false 00:10:15.318 }, 00:10:15.318 "memory_domains": [ 00:10:15.318 { 00:10:15.318 "dma_device_id": "system", 00:10:15.318 "dma_device_type": 1 00:10:15.318 }, 00:10:15.318 { 00:10:15.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.318 "dma_device_type": 2 00:10:15.318 } 00:10:15.318 ], 00:10:15.318 "driver_specific": {} 00:10:15.318 } 00:10:15.318 ] 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.318 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.319 "name": "Existed_Raid", 00:10:15.319 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:15.319 "strip_size_kb": 0, 00:10:15.319 "state": "configuring", 00:10:15.319 "raid_level": "raid1", 00:10:15.319 "superblock": true, 00:10:15.319 "num_base_bdevs": 4, 00:10:15.319 "num_base_bdevs_discovered": 3, 00:10:15.319 "num_base_bdevs_operational": 4, 00:10:15.319 "base_bdevs_list": [ 00:10:15.319 { 00:10:15.319 "name": "BaseBdev1", 00:10:15.319 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:15.319 "is_configured": true, 00:10:15.319 "data_offset": 2048, 00:10:15.319 "data_size": 63488 00:10:15.319 }, 00:10:15.319 { 00:10:15.319 "name": "BaseBdev2", 00:10:15.319 "uuid": 
"c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:15.319 "is_configured": true, 00:10:15.319 "data_offset": 2048, 00:10:15.319 "data_size": 63488 00:10:15.319 }, 00:10:15.319 { 00:10:15.319 "name": "BaseBdev3", 00:10:15.319 "uuid": "cde4fa83-0062-4ff7-b9b4-0c7494c7abac", 00:10:15.319 "is_configured": true, 00:10:15.319 "data_offset": 2048, 00:10:15.319 "data_size": 63488 00:10:15.319 }, 00:10:15.319 { 00:10:15.319 "name": "BaseBdev4", 00:10:15.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.319 "is_configured": false, 00:10:15.319 "data_offset": 0, 00:10:15.319 "data_size": 0 00:10:15.319 } 00:10:15.319 ] 00:10:15.319 }' 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.319 01:54:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 [2024-12-07 01:54:21.097376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.890 [2024-12-07 01:54:21.097664] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:15.890 [2024-12-07 01:54:21.097721] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.890 BaseBdev4 00:10:15.890 [2024-12-07 01:54:21.098047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:15.890 [2024-12-07 01:54:21.098195] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:15.890 [2024-12-07 01:54:21.098251] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:15.890 [2024-12-07 01:54:21.098409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.890 [ 00:10:15.890 { 00:10:15.890 "name": "BaseBdev4", 00:10:15.890 "aliases": [ 00:10:15.890 "d5dc748b-9b7b-48bb-8482-0a5a7bb9bad4" 00:10:15.890 ], 00:10:15.890 "product_name": "Malloc disk", 00:10:15.890 "block_size": 512, 00:10:15.890 
"num_blocks": 65536, 00:10:15.890 "uuid": "d5dc748b-9b7b-48bb-8482-0a5a7bb9bad4", 00:10:15.890 "assigned_rate_limits": { 00:10:15.890 "rw_ios_per_sec": 0, 00:10:15.890 "rw_mbytes_per_sec": 0, 00:10:15.890 "r_mbytes_per_sec": 0, 00:10:15.890 "w_mbytes_per_sec": 0 00:10:15.890 }, 00:10:15.890 "claimed": true, 00:10:15.890 "claim_type": "exclusive_write", 00:10:15.890 "zoned": false, 00:10:15.890 "supported_io_types": { 00:10:15.890 "read": true, 00:10:15.890 "write": true, 00:10:15.890 "unmap": true, 00:10:15.890 "flush": true, 00:10:15.890 "reset": true, 00:10:15.890 "nvme_admin": false, 00:10:15.890 "nvme_io": false, 00:10:15.890 "nvme_io_md": false, 00:10:15.890 "write_zeroes": true, 00:10:15.890 "zcopy": true, 00:10:15.890 "get_zone_info": false, 00:10:15.890 "zone_management": false, 00:10:15.890 "zone_append": false, 00:10:15.890 "compare": false, 00:10:15.890 "compare_and_write": false, 00:10:15.890 "abort": true, 00:10:15.890 "seek_hole": false, 00:10:15.890 "seek_data": false, 00:10:15.890 "copy": true, 00:10:15.890 "nvme_iov_md": false 00:10:15.890 }, 00:10:15.890 "memory_domains": [ 00:10:15.890 { 00:10:15.890 "dma_device_id": "system", 00:10:15.890 "dma_device_type": 1 00:10:15.890 }, 00:10:15.890 { 00:10:15.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.890 "dma_device_type": 2 00:10:15.890 } 00:10:15.890 ], 00:10:15.890 "driver_specific": {} 00:10:15.890 } 00:10:15.890 ] 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.890 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.891 "name": "Existed_Raid", 00:10:15.891 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:15.891 "strip_size_kb": 0, 00:10:15.891 "state": "online", 00:10:15.891 "raid_level": "raid1", 00:10:15.891 "superblock": true, 00:10:15.891 "num_base_bdevs": 4, 
00:10:15.891 "num_base_bdevs_discovered": 4, 00:10:15.891 "num_base_bdevs_operational": 4, 00:10:15.891 "base_bdevs_list": [ 00:10:15.891 { 00:10:15.891 "name": "BaseBdev1", 00:10:15.891 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 }, 00:10:15.891 { 00:10:15.891 "name": "BaseBdev2", 00:10:15.891 "uuid": "c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 }, 00:10:15.891 { 00:10:15.891 "name": "BaseBdev3", 00:10:15.891 "uuid": "cde4fa83-0062-4ff7-b9b4-0c7494c7abac", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 }, 00:10:15.891 { 00:10:15.891 "name": "BaseBdev4", 00:10:15.891 "uuid": "d5dc748b-9b7b-48bb-8482-0a5a7bb9bad4", 00:10:15.891 "is_configured": true, 00:10:15.891 "data_offset": 2048, 00:10:15.891 "data_size": 63488 00:10:15.891 } 00:10:15.891 ] 00:10:15.891 }' 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.891 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.151 
01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.151 [2024-12-07 01:54:21.548969] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.151 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.151 "name": "Existed_Raid", 00:10:16.151 "aliases": [ 00:10:16.151 "363c8e18-cc43-402e-869e-8e77695982cc" 00:10:16.151 ], 00:10:16.151 "product_name": "Raid Volume", 00:10:16.151 "block_size": 512, 00:10:16.151 "num_blocks": 63488, 00:10:16.151 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:16.151 "assigned_rate_limits": { 00:10:16.151 "rw_ios_per_sec": 0, 00:10:16.151 "rw_mbytes_per_sec": 0, 00:10:16.151 "r_mbytes_per_sec": 0, 00:10:16.151 "w_mbytes_per_sec": 0 00:10:16.151 }, 00:10:16.151 "claimed": false, 00:10:16.151 "zoned": false, 00:10:16.151 "supported_io_types": { 00:10:16.151 "read": true, 00:10:16.151 "write": true, 00:10:16.151 "unmap": false, 00:10:16.151 "flush": false, 00:10:16.151 "reset": true, 00:10:16.151 "nvme_admin": false, 00:10:16.151 "nvme_io": false, 00:10:16.151 "nvme_io_md": false, 00:10:16.151 "write_zeroes": true, 00:10:16.151 "zcopy": false, 00:10:16.151 "get_zone_info": false, 00:10:16.151 "zone_management": false, 00:10:16.151 "zone_append": false, 00:10:16.151 "compare": false, 00:10:16.151 "compare_and_write": false, 00:10:16.151 "abort": false, 00:10:16.151 "seek_hole": false, 00:10:16.151 "seek_data": false, 00:10:16.151 "copy": false, 00:10:16.151 
"nvme_iov_md": false 00:10:16.151 }, 00:10:16.151 "memory_domains": [ 00:10:16.151 { 00:10:16.151 "dma_device_id": "system", 00:10:16.151 "dma_device_type": 1 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.151 "dma_device_type": 2 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "system", 00:10:16.151 "dma_device_type": 1 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.151 "dma_device_type": 2 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "system", 00:10:16.151 "dma_device_type": 1 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.151 "dma_device_type": 2 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "system", 00:10:16.151 "dma_device_type": 1 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.151 "dma_device_type": 2 00:10:16.151 } 00:10:16.151 ], 00:10:16.151 "driver_specific": { 00:10:16.151 "raid": { 00:10:16.151 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:16.151 "strip_size_kb": 0, 00:10:16.151 "state": "online", 00:10:16.151 "raid_level": "raid1", 00:10:16.151 "superblock": true, 00:10:16.151 "num_base_bdevs": 4, 00:10:16.151 "num_base_bdevs_discovered": 4, 00:10:16.151 "num_base_bdevs_operational": 4, 00:10:16.151 "base_bdevs_list": [ 00:10:16.151 { 00:10:16.151 "name": "BaseBdev1", 00:10:16.151 "uuid": "62bba8bd-9234-40da-80c6-755c15c54e18", 00:10:16.151 "is_configured": true, 00:10:16.151 "data_offset": 2048, 00:10:16.151 "data_size": 63488 00:10:16.151 }, 00:10:16.151 { 00:10:16.151 "name": "BaseBdev2", 00:10:16.151 "uuid": "c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:16.151 "is_configured": true, 00:10:16.152 "data_offset": 2048, 00:10:16.152 "data_size": 63488 00:10:16.152 }, 00:10:16.152 { 00:10:16.152 "name": "BaseBdev3", 00:10:16.152 "uuid": "cde4fa83-0062-4ff7-b9b4-0c7494c7abac", 00:10:16.152 "is_configured": true, 
00:10:16.152 "data_offset": 2048, 00:10:16.152 "data_size": 63488 00:10:16.152 }, 00:10:16.152 { 00:10:16.152 "name": "BaseBdev4", 00:10:16.152 "uuid": "d5dc748b-9b7b-48bb-8482-0a5a7bb9bad4", 00:10:16.152 "is_configured": true, 00:10:16.152 "data_offset": 2048, 00:10:16.152 "data_size": 63488 00:10:16.152 } 00:10:16.152 ] 00:10:16.152 } 00:10:16.152 } 00:10:16.152 }' 00:10:16.152 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.152 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:16.152 BaseBdev2 00:10:16.152 BaseBdev3 00:10:16.152 BaseBdev4' 00:10:16.152 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.412 01:54:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.412 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.413 [2024-12-07 01:54:21.840160] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:16.413 01:54:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.413 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.672 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.672 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.672 "name": "Existed_Raid", 00:10:16.672 "uuid": "363c8e18-cc43-402e-869e-8e77695982cc", 00:10:16.672 "strip_size_kb": 0, 00:10:16.672 
"state": "online", 00:10:16.672 "raid_level": "raid1", 00:10:16.672 "superblock": true, 00:10:16.672 "num_base_bdevs": 4, 00:10:16.672 "num_base_bdevs_discovered": 3, 00:10:16.672 "num_base_bdevs_operational": 3, 00:10:16.672 "base_bdevs_list": [ 00:10:16.672 { 00:10:16.672 "name": null, 00:10:16.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.672 "is_configured": false, 00:10:16.672 "data_offset": 0, 00:10:16.672 "data_size": 63488 00:10:16.672 }, 00:10:16.672 { 00:10:16.672 "name": "BaseBdev2", 00:10:16.672 "uuid": "c8b4249b-2d0b-4654-a122-ae73022af5a4", 00:10:16.672 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 }, 00:10:16.672 { 00:10:16.672 "name": "BaseBdev3", 00:10:16.672 "uuid": "cde4fa83-0062-4ff7-b9b4-0c7494c7abac", 00:10:16.672 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 }, 00:10:16.672 { 00:10:16.672 "name": "BaseBdev4", 00:10:16.672 "uuid": "d5dc748b-9b7b-48bb-8482-0a5a7bb9bad4", 00:10:16.672 "is_configured": true, 00:10:16.672 "data_offset": 2048, 00:10:16.672 "data_size": 63488 00:10:16.672 } 00:10:16.672 ] 00:10:16.672 }' 00:10:16.672 01:54:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.672 01:54:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.932 01:54:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.932 [2024-12-07 01:54:22.358625] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:16.932 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:16.933 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.933 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.933 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 [2024-12-07 01:54:22.417448] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 [2024-12-07 01:54:22.480562] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:17.194 [2024-12-07 01:54:22.480725] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.194 [2024-12-07 01:54:22.492132] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.194 [2024-12-07 01:54:22.492246] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.194 [2024-12-07 01:54:22.492279] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 BaseBdev2 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:17.194 [ 00:10:17.194 { 00:10:17.194 "name": "BaseBdev2", 00:10:17.194 "aliases": [ 00:10:17.194 "52e87ca9-6de9-4840-a7f5-d21532d7d688" 00:10:17.194 ], 00:10:17.194 "product_name": "Malloc disk", 00:10:17.194 "block_size": 512, 00:10:17.194 "num_blocks": 65536, 00:10:17.194 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:17.194 "assigned_rate_limits": { 00:10:17.194 "rw_ios_per_sec": 0, 00:10:17.194 "rw_mbytes_per_sec": 0, 00:10:17.194 "r_mbytes_per_sec": 0, 00:10:17.194 "w_mbytes_per_sec": 0 00:10:17.194 }, 00:10:17.194 "claimed": false, 00:10:17.194 "zoned": false, 00:10:17.194 "supported_io_types": { 00:10:17.194 "read": true, 00:10:17.194 "write": true, 00:10:17.194 "unmap": true, 00:10:17.194 "flush": true, 00:10:17.194 "reset": true, 00:10:17.194 "nvme_admin": false, 00:10:17.194 "nvme_io": false, 00:10:17.194 "nvme_io_md": false, 00:10:17.194 "write_zeroes": true, 00:10:17.194 "zcopy": true, 00:10:17.194 "get_zone_info": false, 00:10:17.194 "zone_management": false, 00:10:17.194 "zone_append": false, 00:10:17.194 "compare": false, 00:10:17.194 "compare_and_write": false, 00:10:17.194 "abort": true, 00:10:17.194 "seek_hole": false, 00:10:17.194 "seek_data": false, 00:10:17.194 "copy": true, 00:10:17.194 "nvme_iov_md": false 00:10:17.194 }, 00:10:17.194 "memory_domains": [ 00:10:17.194 { 00:10:17.194 "dma_device_id": "system", 00:10:17.194 "dma_device_type": 1 00:10:17.194 }, 00:10:17.194 { 00:10:17.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.194 "dma_device_type": 2 00:10:17.194 } 00:10:17.194 ], 00:10:17.194 "driver_specific": {} 00:10:17.194 } 00:10:17.194 ] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.194 01:54:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.194 BaseBdev3 00:10:17.194 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.195 01:54:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.195 [ 00:10:17.195 { 00:10:17.195 "name": "BaseBdev3", 00:10:17.195 "aliases": [ 00:10:17.195 "320eacae-563e-480d-b808-3107cce68036" 00:10:17.195 ], 00:10:17.195 "product_name": "Malloc disk", 00:10:17.195 "block_size": 512, 00:10:17.195 "num_blocks": 65536, 00:10:17.195 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:17.195 "assigned_rate_limits": { 00:10:17.195 "rw_ios_per_sec": 0, 00:10:17.195 "rw_mbytes_per_sec": 0, 00:10:17.195 "r_mbytes_per_sec": 0, 00:10:17.195 "w_mbytes_per_sec": 0 00:10:17.195 }, 00:10:17.195 "claimed": false, 00:10:17.195 "zoned": false, 00:10:17.195 "supported_io_types": { 00:10:17.195 "read": true, 00:10:17.195 "write": true, 00:10:17.195 "unmap": true, 00:10:17.195 "flush": true, 00:10:17.195 "reset": true, 00:10:17.195 "nvme_admin": false, 00:10:17.195 "nvme_io": false, 00:10:17.195 "nvme_io_md": false, 00:10:17.195 "write_zeroes": true, 00:10:17.195 "zcopy": true, 00:10:17.195 "get_zone_info": false, 00:10:17.195 "zone_management": false, 00:10:17.195 "zone_append": false, 00:10:17.195 "compare": false, 00:10:17.195 "compare_and_write": false, 00:10:17.195 "abort": true, 00:10:17.195 "seek_hole": false, 00:10:17.195 "seek_data": false, 00:10:17.195 "copy": true, 00:10:17.195 "nvme_iov_md": false 00:10:17.195 }, 00:10:17.195 "memory_domains": [ 00:10:17.195 { 00:10:17.195 "dma_device_id": "system", 00:10:17.195 "dma_device_type": 1 00:10:17.195 }, 00:10:17.195 { 00:10:17.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.195 "dma_device_type": 2 00:10:17.195 } 00:10:17.195 ], 00:10:17.195 "driver_specific": {} 00:10:17.195 } 00:10:17.195 ] 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.195 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 BaseBdev4 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 [ 00:10:17.456 { 00:10:17.456 "name": "BaseBdev4", 00:10:17.456 "aliases": [ 00:10:17.456 "bfe42f37-ab91-4763-935c-33ec5277f435" 00:10:17.456 ], 00:10:17.456 "product_name": "Malloc disk", 00:10:17.456 "block_size": 512, 00:10:17.456 "num_blocks": 65536, 00:10:17.456 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:17.456 "assigned_rate_limits": { 00:10:17.456 "rw_ios_per_sec": 0, 00:10:17.456 "rw_mbytes_per_sec": 0, 00:10:17.456 "r_mbytes_per_sec": 0, 00:10:17.456 "w_mbytes_per_sec": 0 00:10:17.456 }, 00:10:17.456 "claimed": false, 00:10:17.456 "zoned": false, 00:10:17.456 "supported_io_types": { 00:10:17.456 "read": true, 00:10:17.456 "write": true, 00:10:17.456 "unmap": true, 00:10:17.456 "flush": true, 00:10:17.456 "reset": true, 00:10:17.456 "nvme_admin": false, 00:10:17.456 "nvme_io": false, 00:10:17.456 "nvme_io_md": false, 00:10:17.456 "write_zeroes": true, 00:10:17.456 "zcopy": true, 00:10:17.456 "get_zone_info": false, 00:10:17.456 "zone_management": false, 00:10:17.456 "zone_append": false, 00:10:17.456 "compare": false, 00:10:17.456 "compare_and_write": false, 00:10:17.456 "abort": true, 00:10:17.456 "seek_hole": false, 00:10:17.456 "seek_data": false, 00:10:17.456 "copy": true, 00:10:17.456 "nvme_iov_md": false 00:10:17.456 }, 00:10:17.456 "memory_domains": [ 00:10:17.456 { 00:10:17.456 "dma_device_id": "system", 00:10:17.456 "dma_device_type": 1 00:10:17.456 }, 00:10:17.456 { 00:10:17.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.456 "dma_device_type": 2 00:10:17.456 } 00:10:17.456 ], 00:10:17.456 "driver_specific": {} 00:10:17.456 } 00:10:17.456 ] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 [2024-12-07 01:54:22.707865] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:17.456 [2024-12-07 01:54:22.707951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:17.456 [2024-12-07 01:54:22.707991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.456 [2024-12-07 01:54:22.709818] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.456 [2024-12-07 01:54:22.709914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.456 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.456 "name": "Existed_Raid", 00:10:17.456 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:17.456 "strip_size_kb": 0, 00:10:17.456 "state": "configuring", 00:10:17.456 "raid_level": "raid1", 00:10:17.456 "superblock": true, 00:10:17.456 "num_base_bdevs": 4, 00:10:17.456 "num_base_bdevs_discovered": 3, 00:10:17.456 "num_base_bdevs_operational": 4, 00:10:17.456 "base_bdevs_list": [ 00:10:17.456 { 00:10:17.456 "name": "BaseBdev1", 00:10:17.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.456 "is_configured": false, 00:10:17.456 "data_offset": 0, 00:10:17.456 "data_size": 0 00:10:17.456 }, 00:10:17.456 { 00:10:17.456 "name": "BaseBdev2", 00:10:17.456 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 
00:10:17.456 "is_configured": true, 00:10:17.456 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 }, 00:10:17.457 { 00:10:17.457 "name": "BaseBdev3", 00:10:17.457 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:17.457 "is_configured": true, 00:10:17.457 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 }, 00:10:17.457 { 00:10:17.457 "name": "BaseBdev4", 00:10:17.457 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:17.457 "is_configured": true, 00:10:17.457 "data_offset": 2048, 00:10:17.457 "data_size": 63488 00:10:17.457 } 00:10:17.457 ] 00:10:17.457 }' 00:10:17.457 01:54:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.457 01:54:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.026 [2024-12-07 01:54:23.183192] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.026 "name": "Existed_Raid", 00:10:18.026 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:18.026 "strip_size_kb": 0, 00:10:18.026 "state": "configuring", 00:10:18.026 "raid_level": "raid1", 00:10:18.026 "superblock": true, 00:10:18.026 "num_base_bdevs": 4, 00:10:18.026 "num_base_bdevs_discovered": 2, 00:10:18.026 "num_base_bdevs_operational": 4, 00:10:18.026 "base_bdevs_list": [ 00:10:18.026 { 00:10:18.026 "name": "BaseBdev1", 00:10:18.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.026 "is_configured": false, 00:10:18.026 "data_offset": 0, 00:10:18.026 "data_size": 0 00:10:18.026 }, 00:10:18.026 { 00:10:18.026 "name": null, 00:10:18.026 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:18.026 
"is_configured": false, 00:10:18.026 "data_offset": 0, 00:10:18.026 "data_size": 63488 00:10:18.026 }, 00:10:18.026 { 00:10:18.026 "name": "BaseBdev3", 00:10:18.026 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:18.026 "is_configured": true, 00:10:18.026 "data_offset": 2048, 00:10:18.026 "data_size": 63488 00:10:18.026 }, 00:10:18.026 { 00:10:18.026 "name": "BaseBdev4", 00:10:18.026 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:18.026 "is_configured": true, 00:10:18.026 "data_offset": 2048, 00:10:18.026 "data_size": 63488 00:10:18.026 } 00:10:18.026 ] 00:10:18.026 }' 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.026 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.287 [2024-12-07 01:54:23.688996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:18.287 BaseBdev1 
00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.287 [ 00:10:18.287 { 00:10:18.287 "name": "BaseBdev1", 00:10:18.287 "aliases": [ 00:10:18.287 "caecf39e-6b48-4751-8bfb-8e8aea461f07" 00:10:18.287 ], 00:10:18.287 "product_name": "Malloc disk", 00:10:18.287 "block_size": 512, 00:10:18.287 "num_blocks": 65536, 00:10:18.287 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:18.287 "assigned_rate_limits": { 00:10:18.287 
"rw_ios_per_sec": 0, 00:10:18.287 "rw_mbytes_per_sec": 0, 00:10:18.287 "r_mbytes_per_sec": 0, 00:10:18.287 "w_mbytes_per_sec": 0 00:10:18.287 }, 00:10:18.287 "claimed": true, 00:10:18.287 "claim_type": "exclusive_write", 00:10:18.287 "zoned": false, 00:10:18.287 "supported_io_types": { 00:10:18.287 "read": true, 00:10:18.287 "write": true, 00:10:18.287 "unmap": true, 00:10:18.287 "flush": true, 00:10:18.287 "reset": true, 00:10:18.287 "nvme_admin": false, 00:10:18.287 "nvme_io": false, 00:10:18.287 "nvme_io_md": false, 00:10:18.287 "write_zeroes": true, 00:10:18.287 "zcopy": true, 00:10:18.287 "get_zone_info": false, 00:10:18.287 "zone_management": false, 00:10:18.287 "zone_append": false, 00:10:18.287 "compare": false, 00:10:18.287 "compare_and_write": false, 00:10:18.287 "abort": true, 00:10:18.287 "seek_hole": false, 00:10:18.287 "seek_data": false, 00:10:18.287 "copy": true, 00:10:18.287 "nvme_iov_md": false 00:10:18.287 }, 00:10:18.287 "memory_domains": [ 00:10:18.287 { 00:10:18.287 "dma_device_id": "system", 00:10:18.287 "dma_device_type": 1 00:10:18.287 }, 00:10:18.287 { 00:10:18.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.287 "dma_device_type": 2 00:10:18.287 } 00:10:18.287 ], 00:10:18.287 "driver_specific": {} 00:10:18.287 } 00:10:18.287 ] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.287 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.547 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.547 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.547 "name": "Existed_Raid", 00:10:18.547 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:18.547 "strip_size_kb": 0, 00:10:18.547 "state": "configuring", 00:10:18.547 "raid_level": "raid1", 00:10:18.547 "superblock": true, 00:10:18.547 "num_base_bdevs": 4, 00:10:18.547 "num_base_bdevs_discovered": 3, 00:10:18.547 "num_base_bdevs_operational": 4, 00:10:18.547 "base_bdevs_list": [ 00:10:18.547 { 00:10:18.547 "name": "BaseBdev1", 00:10:18.547 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 
00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": null, 00:10:18.547 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:18.547 "is_configured": false, 00:10:18.547 "data_offset": 0, 00:10:18.547 "data_size": 63488 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "BaseBdev3", 00:10:18.547 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 }, 00:10:18.547 { 00:10:18.547 "name": "BaseBdev4", 00:10:18.547 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:18.547 "is_configured": true, 00:10:18.547 "data_offset": 2048, 00:10:18.547 "data_size": 63488 00:10:18.547 } 00:10:18.547 ] 00:10:18.547 }' 00:10:18.547 01:54:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.547 01:54:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 
[2024-12-07 01:54:24.216147] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.807 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.066 01:54:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.066 "name": "Existed_Raid", 00:10:19.066 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:19.066 "strip_size_kb": 0, 00:10:19.066 "state": "configuring", 00:10:19.066 "raid_level": "raid1", 00:10:19.066 "superblock": true, 00:10:19.066 "num_base_bdevs": 4, 00:10:19.066 "num_base_bdevs_discovered": 2, 00:10:19.066 "num_base_bdevs_operational": 4, 00:10:19.066 "base_bdevs_list": [ 00:10:19.066 { 00:10:19.066 "name": "BaseBdev1", 00:10:19.066 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:19.066 "is_configured": true, 00:10:19.066 "data_offset": 2048, 00:10:19.066 "data_size": 63488 00:10:19.066 }, 00:10:19.066 { 00:10:19.066 "name": null, 00:10:19.066 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:19.066 "is_configured": false, 00:10:19.066 "data_offset": 0, 00:10:19.066 "data_size": 63488 00:10:19.067 }, 00:10:19.067 { 00:10:19.067 "name": null, 00:10:19.067 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:19.067 "is_configured": false, 00:10:19.067 "data_offset": 0, 00:10:19.067 "data_size": 63488 00:10:19.067 }, 00:10:19.067 { 00:10:19.067 "name": "BaseBdev4", 00:10:19.067 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:19.067 "is_configured": true, 00:10:19.067 "data_offset": 2048, 00:10:19.067 "data_size": 63488 00:10:19.067 } 00:10:19.067 ] 00:10:19.067 }' 00:10:19.067 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.067 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.327 
01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 [2024-12-07 01:54:24.699368] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.327 "name": "Existed_Raid", 00:10:19.327 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:19.327 "strip_size_kb": 0, 00:10:19.327 "state": "configuring", 00:10:19.327 "raid_level": "raid1", 00:10:19.327 "superblock": true, 00:10:19.327 "num_base_bdevs": 4, 00:10:19.327 "num_base_bdevs_discovered": 3, 00:10:19.327 "num_base_bdevs_operational": 4, 00:10:19.327 "base_bdevs_list": [ 00:10:19.327 { 00:10:19.327 "name": "BaseBdev1", 00:10:19.327 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:19.327 "is_configured": true, 00:10:19.327 "data_offset": 2048, 00:10:19.327 "data_size": 63488 00:10:19.327 }, 00:10:19.327 { 00:10:19.327 "name": null, 00:10:19.327 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:19.327 "is_configured": false, 00:10:19.327 "data_offset": 0, 00:10:19.327 "data_size": 63488 00:10:19.327 }, 00:10:19.327 { 00:10:19.327 "name": "BaseBdev3", 00:10:19.327 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:19.327 "is_configured": true, 00:10:19.327 "data_offset": 2048, 00:10:19.327 "data_size": 63488 00:10:19.327 }, 00:10:19.327 { 00:10:19.327 "name": "BaseBdev4", 00:10:19.327 "uuid": 
"bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:19.327 "is_configured": true, 00:10:19.327 "data_offset": 2048, 00:10:19.327 "data_size": 63488 00:10:19.327 } 00:10:19.327 ] 00:10:19.327 }' 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.327 01:54:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.895 [2024-12-07 01:54:25.134742] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.895 "name": "Existed_Raid", 00:10:19.895 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:19.895 "strip_size_kb": 0, 00:10:19.895 "state": "configuring", 00:10:19.895 "raid_level": "raid1", 00:10:19.895 "superblock": true, 00:10:19.895 "num_base_bdevs": 4, 00:10:19.895 "num_base_bdevs_discovered": 2, 00:10:19.895 "num_base_bdevs_operational": 4, 00:10:19.895 "base_bdevs_list": [ 00:10:19.895 { 00:10:19.895 "name": null, 00:10:19.895 
"uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:19.895 "is_configured": false, 00:10:19.895 "data_offset": 0, 00:10:19.895 "data_size": 63488 00:10:19.895 }, 00:10:19.895 { 00:10:19.895 "name": null, 00:10:19.895 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:19.895 "is_configured": false, 00:10:19.895 "data_offset": 0, 00:10:19.895 "data_size": 63488 00:10:19.895 }, 00:10:19.895 { 00:10:19.895 "name": "BaseBdev3", 00:10:19.895 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:19.895 "is_configured": true, 00:10:19.895 "data_offset": 2048, 00:10:19.895 "data_size": 63488 00:10:19.895 }, 00:10:19.895 { 00:10:19.895 "name": "BaseBdev4", 00:10:19.895 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:19.895 "is_configured": true, 00:10:19.895 "data_offset": 2048, 00:10:19.895 "data_size": 63488 00:10:19.895 } 00:10:19.895 ] 00:10:19.895 }' 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.895 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.154 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.413 [2024-12-07 01:54:25.616397] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:20.413 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.414 "name": "Existed_Raid", 00:10:20.414 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:20.414 "strip_size_kb": 0, 00:10:20.414 "state": "configuring", 00:10:20.414 "raid_level": "raid1", 00:10:20.414 "superblock": true, 00:10:20.414 "num_base_bdevs": 4, 00:10:20.414 "num_base_bdevs_discovered": 3, 00:10:20.414 "num_base_bdevs_operational": 4, 00:10:20.414 "base_bdevs_list": [ 00:10:20.414 { 00:10:20.414 "name": null, 00:10:20.414 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:20.414 "is_configured": false, 00:10:20.414 "data_offset": 0, 00:10:20.414 "data_size": 63488 00:10:20.414 }, 00:10:20.414 { 00:10:20.414 "name": "BaseBdev2", 00:10:20.414 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 }, 00:10:20.414 { 00:10:20.414 "name": "BaseBdev3", 00:10:20.414 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 }, 00:10:20.414 { 00:10:20.414 "name": "BaseBdev4", 00:10:20.414 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:20.414 "is_configured": true, 00:10:20.414 "data_offset": 2048, 00:10:20.414 "data_size": 63488 00:10:20.414 } 00:10:20.414 ] 00:10:20.414 }' 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.414 01:54:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:20.673 01:54:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u caecf39e-6b48-4751-8bfb-8e8aea461f07 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.673 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.673 [2024-12-07 01:54:26.094277] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:20.673 [2024-12-07 01:54:26.094520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:20.673 [2024-12-07 01:54:26.094569] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.673 [2024-12-07 01:54:26.094849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:10:20.673 NewBaseBdev 00:10:20.674 [2024-12-07 01:54:26.095016] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:20.674 [2024-12-07 01:54:26.095068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:20.674 [2024-12-07 01:54:26.095176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.674 01:54:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:20.674 [ 00:10:20.674 { 00:10:20.674 "name": "NewBaseBdev", 00:10:20.674 "aliases": [ 00:10:20.674 "caecf39e-6b48-4751-8bfb-8e8aea461f07" 00:10:20.674 ], 00:10:20.674 "product_name": "Malloc disk", 00:10:20.674 "block_size": 512, 00:10:20.674 "num_blocks": 65536, 00:10:20.674 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:20.674 "assigned_rate_limits": { 00:10:20.674 "rw_ios_per_sec": 0, 00:10:20.674 "rw_mbytes_per_sec": 0, 00:10:20.674 "r_mbytes_per_sec": 0, 00:10:20.674 "w_mbytes_per_sec": 0 00:10:20.674 }, 00:10:20.674 "claimed": true, 00:10:20.674 "claim_type": "exclusive_write", 00:10:20.674 "zoned": false, 00:10:20.674 "supported_io_types": { 00:10:20.674 "read": true, 00:10:20.674 "write": true, 00:10:20.674 "unmap": true, 00:10:20.674 "flush": true, 00:10:20.674 "reset": true, 00:10:20.674 "nvme_admin": false, 00:10:20.674 "nvme_io": false, 00:10:20.674 "nvme_io_md": false, 00:10:20.674 "write_zeroes": true, 00:10:20.674 "zcopy": true, 00:10:20.674 "get_zone_info": false, 00:10:20.674 "zone_management": false, 00:10:20.674 "zone_append": false, 00:10:20.674 "compare": false, 00:10:20.674 "compare_and_write": false, 00:10:20.674 "abort": true, 00:10:20.674 "seek_hole": false, 00:10:20.674 "seek_data": false, 00:10:20.674 "copy": true, 00:10:20.674 "nvme_iov_md": false 00:10:20.674 }, 00:10:20.674 "memory_domains": [ 00:10:20.674 { 00:10:20.674 "dma_device_id": "system", 00:10:20.674 "dma_device_type": 1 00:10:20.674 }, 00:10:20.674 { 00:10:20.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:20.674 "dma_device_type": 2 00:10:20.674 } 00:10:20.674 ], 00:10:20.674 "driver_specific": {} 00:10:20.674 } 00:10:20.674 ] 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.674 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:20.674 01:54:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.134 "name": "Existed_Raid", 00:10:21.134 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:21.134 "strip_size_kb": 0, 00:10:21.134 
"state": "online", 00:10:21.134 "raid_level": "raid1", 00:10:21.134 "superblock": true, 00:10:21.134 "num_base_bdevs": 4, 00:10:21.134 "num_base_bdevs_discovered": 4, 00:10:21.134 "num_base_bdevs_operational": 4, 00:10:21.134 "base_bdevs_list": [ 00:10:21.134 { 00:10:21.134 "name": "NewBaseBdev", 00:10:21.134 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:21.134 "is_configured": true, 00:10:21.134 "data_offset": 2048, 00:10:21.134 "data_size": 63488 00:10:21.134 }, 00:10:21.134 { 00:10:21.134 "name": "BaseBdev2", 00:10:21.134 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:21.134 "is_configured": true, 00:10:21.134 "data_offset": 2048, 00:10:21.134 "data_size": 63488 00:10:21.134 }, 00:10:21.134 { 00:10:21.134 "name": "BaseBdev3", 00:10:21.134 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:21.134 "is_configured": true, 00:10:21.134 "data_offset": 2048, 00:10:21.134 "data_size": 63488 00:10:21.134 }, 00:10:21.134 { 00:10:21.134 "name": "BaseBdev4", 00:10:21.134 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:21.134 "is_configured": true, 00:10:21.134 "data_offset": 2048, 00:10:21.134 "data_size": 63488 00:10:21.134 } 00:10:21.134 ] 00:10:21.134 }' 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:21.134 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.135 
01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.135 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.135 [2024-12-07 01:54:26.549885] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.423 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.423 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.423 "name": "Existed_Raid", 00:10:21.423 "aliases": [ 00:10:21.423 "31071fba-87e4-4608-b1da-a403e4374b76" 00:10:21.423 ], 00:10:21.423 "product_name": "Raid Volume", 00:10:21.423 "block_size": 512, 00:10:21.423 "num_blocks": 63488, 00:10:21.423 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:21.423 "assigned_rate_limits": { 00:10:21.423 "rw_ios_per_sec": 0, 00:10:21.423 "rw_mbytes_per_sec": 0, 00:10:21.423 "r_mbytes_per_sec": 0, 00:10:21.423 "w_mbytes_per_sec": 0 00:10:21.423 }, 00:10:21.423 "claimed": false, 00:10:21.423 "zoned": false, 00:10:21.423 "supported_io_types": { 00:10:21.423 "read": true, 00:10:21.423 "write": true, 00:10:21.423 "unmap": false, 00:10:21.423 "flush": false, 00:10:21.423 "reset": true, 00:10:21.423 "nvme_admin": false, 00:10:21.423 "nvme_io": false, 00:10:21.423 "nvme_io_md": false, 00:10:21.423 "write_zeroes": true, 00:10:21.423 "zcopy": false, 00:10:21.423 "get_zone_info": false, 00:10:21.423 "zone_management": false, 00:10:21.423 "zone_append": false, 00:10:21.423 "compare": false, 00:10:21.423 "compare_and_write": false, 00:10:21.423 
"abort": false, 00:10:21.423 "seek_hole": false, 00:10:21.423 "seek_data": false, 00:10:21.423 "copy": false, 00:10:21.423 "nvme_iov_md": false 00:10:21.423 }, 00:10:21.423 "memory_domains": [ 00:10:21.423 { 00:10:21.423 "dma_device_id": "system", 00:10:21.423 "dma_device_type": 1 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.423 "dma_device_type": 2 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "system", 00:10:21.423 "dma_device_type": 1 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.423 "dma_device_type": 2 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "system", 00:10:21.423 "dma_device_type": 1 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.423 "dma_device_type": 2 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "system", 00:10:21.423 "dma_device_type": 1 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.423 "dma_device_type": 2 00:10:21.423 } 00:10:21.423 ], 00:10:21.423 "driver_specific": { 00:10:21.423 "raid": { 00:10:21.423 "uuid": "31071fba-87e4-4608-b1da-a403e4374b76", 00:10:21.423 "strip_size_kb": 0, 00:10:21.423 "state": "online", 00:10:21.423 "raid_level": "raid1", 00:10:21.423 "superblock": true, 00:10:21.423 "num_base_bdevs": 4, 00:10:21.423 "num_base_bdevs_discovered": 4, 00:10:21.423 "num_base_bdevs_operational": 4, 00:10:21.423 "base_bdevs_list": [ 00:10:21.423 { 00:10:21.423 "name": "NewBaseBdev", 00:10:21.423 "uuid": "caecf39e-6b48-4751-8bfb-8e8aea461f07", 00:10:21.423 "is_configured": true, 00:10:21.423 "data_offset": 2048, 00:10:21.423 "data_size": 63488 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "name": "BaseBdev2", 00:10:21.423 "uuid": "52e87ca9-6de9-4840-a7f5-d21532d7d688", 00:10:21.423 "is_configured": true, 00:10:21.423 "data_offset": 2048, 00:10:21.423 "data_size": 63488 00:10:21.423 }, 00:10:21.423 { 
00:10:21.423 "name": "BaseBdev3", 00:10:21.423 "uuid": "320eacae-563e-480d-b808-3107cce68036", 00:10:21.423 "is_configured": true, 00:10:21.423 "data_offset": 2048, 00:10:21.423 "data_size": 63488 00:10:21.423 }, 00:10:21.423 { 00:10:21.423 "name": "BaseBdev4", 00:10:21.423 "uuid": "bfe42f37-ab91-4763-935c-33ec5277f435", 00:10:21.423 "is_configured": true, 00:10:21.423 "data_offset": 2048, 00:10:21.423 "data_size": 63488 00:10:21.423 } 00:10:21.424 ] 00:10:21.424 } 00:10:21.424 } 00:10:21.424 }' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:21.424 BaseBdev2 00:10:21.424 BaseBdev3 00:10:21.424 BaseBdev4' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.424 [2024-12-07 01:54:26.829034] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.424 [2024-12-07 01:54:26.829059] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.424 [2024-12-07 01:54:26.829127] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.424 [2024-12-07 01:54:26.829368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.424 [2024-12-07 01:54:26.829382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84347 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 84347 ']' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 84347 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84347 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84347' 00:10:21.424 killing process with pid 84347 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 84347 00:10:21.424 [2024-12-07 01:54:26.872573] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.424 01:54:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 84347 00:10:21.685 [2024-12-07 01:54:26.913060] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.946 01:54:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:21.946 00:10:21.946 real 0m9.359s 00:10:21.946 user 0m16.027s 00:10:21.946 sys 0m1.894s 00:10:21.946 01:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:10:21.946 ************************************ 00:10:21.946 END TEST raid_state_function_test_sb 00:10:21.946 ************************************ 00:10:21.946 01:54:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:21.946 01:54:27 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:21.946 01:54:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:21.946 01:54:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.946 01:54:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:21.946 ************************************ 00:10:21.946 START TEST raid_superblock_test 00:10:21.946 ************************************ 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:21.946 01:54:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84996 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84996 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84996 ']' 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.946 01:54:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.946 [2024-12-07 01:54:27.311224] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:21.946 [2024-12-07 01:54:27.311404] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84996 ] 00:10:22.207 [2024-12-07 01:54:27.450597] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.207 [2024-12-07 01:54:27.493593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.207 [2024-12-07 01:54:27.534987] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.207 [2024-12-07 01:54:27.535137] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:22.775 
01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.775 malloc1 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.775 [2024-12-07 01:54:28.168534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:22.775 [2024-12-07 01:54:28.168653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.775 [2024-12-07 01:54:28.168695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:22.775 [2024-12-07 01:54:28.168711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.775 [2024-12-07 01:54:28.170899] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.775 [2024-12-07 01:54:28.170980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:22.775 pt1 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.775 malloc2 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.775 [2024-12-07 01:54:28.205180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:22.775 [2024-12-07 01:54:28.205300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.775 [2024-12-07 01:54:28.205343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:22.775 [2024-12-07 01:54:28.205389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.775 [2024-12-07 01:54:28.207960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.775 [2024-12-07 01:54:28.208043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:22.775 
pt2 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.775 malloc3 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.775 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 [2024-12-07 01:54:28.237767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:23.035 [2024-12-07 01:54:28.237882] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.035 [2024-12-07 01:54:28.237921] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:23.035 [2024-12-07 01:54:28.237954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.035 [2024-12-07 01:54:28.240207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.035 [2024-12-07 01:54:28.240291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:23.035 pt3 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 malloc4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 [2024-12-07 01:54:28.270293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:23.035 [2024-12-07 01:54:28.270349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.035 [2024-12-07 01:54:28.270383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:23.035 [2024-12-07 01:54:28.270396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.035 [2024-12-07 01:54:28.272526] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.035 [2024-12-07 01:54:28.272561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:23.035 pt4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 [2024-12-07 01:54:28.282315] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:23.035 [2024-12-07 01:54:28.284189] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:23.035 [2024-12-07 01:54:28.284259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:23.035 [2024-12-07 01:54:28.284301] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:23.035 [2024-12-07 01:54:28.284461] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:23.035 [2024-12-07 01:54:28.284473] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.035 [2024-12-07 01:54:28.284734] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:23.035 [2024-12-07 01:54:28.284883] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:23.035 [2024-12-07 01:54:28.284900] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:23.035 [2024-12-07 01:54:28.285008] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.035 
01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.035 "name": "raid_bdev1", 00:10:23.035 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:23.035 "strip_size_kb": 0, 00:10:23.035 "state": "online", 00:10:23.035 "raid_level": "raid1", 00:10:23.035 "superblock": true, 00:10:23.035 "num_base_bdevs": 4, 00:10:23.035 "num_base_bdevs_discovered": 4, 00:10:23.035 "num_base_bdevs_operational": 4, 00:10:23.035 "base_bdevs_list": [ 00:10:23.035 { 00:10:23.035 "name": "pt1", 00:10:23.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.035 "is_configured": true, 00:10:23.035 "data_offset": 2048, 00:10:23.035 "data_size": 63488 00:10:23.035 }, 00:10:23.035 { 00:10:23.035 "name": "pt2", 00:10:23.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.035 "is_configured": true, 00:10:23.035 "data_offset": 2048, 00:10:23.035 "data_size": 63488 00:10:23.035 }, 00:10:23.035 { 00:10:23.035 "name": "pt3", 00:10:23.035 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.035 "is_configured": true, 00:10:23.035 "data_offset": 2048, 00:10:23.035 "data_size": 63488 
00:10:23.035 }, 00:10:23.035 { 00:10:23.035 "name": "pt4", 00:10:23.035 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.035 "is_configured": true, 00:10:23.035 "data_offset": 2048, 00:10:23.035 "data_size": 63488 00:10:23.035 } 00:10:23.035 ] 00:10:23.035 }' 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.035 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.295 [2024-12-07 01:54:28.685984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.295 "name": "raid_bdev1", 00:10:23.295 "aliases": [ 00:10:23.295 "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70" 00:10:23.295 ], 
00:10:23.295 "product_name": "Raid Volume", 00:10:23.295 "block_size": 512, 00:10:23.295 "num_blocks": 63488, 00:10:23.295 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:23.295 "assigned_rate_limits": { 00:10:23.295 "rw_ios_per_sec": 0, 00:10:23.295 "rw_mbytes_per_sec": 0, 00:10:23.295 "r_mbytes_per_sec": 0, 00:10:23.295 "w_mbytes_per_sec": 0 00:10:23.295 }, 00:10:23.295 "claimed": false, 00:10:23.295 "zoned": false, 00:10:23.295 "supported_io_types": { 00:10:23.295 "read": true, 00:10:23.295 "write": true, 00:10:23.295 "unmap": false, 00:10:23.295 "flush": false, 00:10:23.295 "reset": true, 00:10:23.295 "nvme_admin": false, 00:10:23.295 "nvme_io": false, 00:10:23.295 "nvme_io_md": false, 00:10:23.295 "write_zeroes": true, 00:10:23.295 "zcopy": false, 00:10:23.295 "get_zone_info": false, 00:10:23.295 "zone_management": false, 00:10:23.295 "zone_append": false, 00:10:23.295 "compare": false, 00:10:23.295 "compare_and_write": false, 00:10:23.295 "abort": false, 00:10:23.295 "seek_hole": false, 00:10:23.295 "seek_data": false, 00:10:23.295 "copy": false, 00:10:23.295 "nvme_iov_md": false 00:10:23.295 }, 00:10:23.295 "memory_domains": [ 00:10:23.295 { 00:10:23.295 "dma_device_id": "system", 00:10:23.295 "dma_device_type": 1 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.295 "dma_device_type": 2 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "system", 00:10:23.295 "dma_device_type": 1 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.295 "dma_device_type": 2 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "system", 00:10:23.295 "dma_device_type": 1 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.295 "dma_device_type": 2 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": "system", 00:10:23.295 "dma_device_type": 1 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:23.295 "dma_device_type": 2 00:10:23.295 } 00:10:23.295 ], 00:10:23.295 "driver_specific": { 00:10:23.295 "raid": { 00:10:23.295 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:23.295 "strip_size_kb": 0, 00:10:23.295 "state": "online", 00:10:23.295 "raid_level": "raid1", 00:10:23.295 "superblock": true, 00:10:23.295 "num_base_bdevs": 4, 00:10:23.295 "num_base_bdevs_discovered": 4, 00:10:23.295 "num_base_bdevs_operational": 4, 00:10:23.295 "base_bdevs_list": [ 00:10:23.295 { 00:10:23.295 "name": "pt1", 00:10:23.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:23.295 "is_configured": true, 00:10:23.295 "data_offset": 2048, 00:10:23.295 "data_size": 63488 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "name": "pt2", 00:10:23.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:23.295 "is_configured": true, 00:10:23.295 "data_offset": 2048, 00:10:23.295 "data_size": 63488 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "name": "pt3", 00:10:23.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:23.295 "is_configured": true, 00:10:23.295 "data_offset": 2048, 00:10:23.295 "data_size": 63488 00:10:23.295 }, 00:10:23.295 { 00:10:23.295 "name": "pt4", 00:10:23.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:23.295 "is_configured": true, 00:10:23.295 "data_offset": 2048, 00:10:23.295 "data_size": 63488 00:10:23.295 } 00:10:23.295 ] 00:10:23.295 } 00:10:23.295 } 00:10:23.295 }' 00:10:23.295 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:23.555 pt2 00:10:23.555 pt3 00:10:23.555 pt4' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.555 01:54:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.555 01:54:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 [2024-12-07 01:54:29.033258] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 ']' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 [2024-12-07 01:54:29.064934] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.815 [2024-12-07 01:54:29.064998] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.815 [2024-12-07 01:54:29.065086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.815 [2024-12-07 01:54:29.065205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.815 [2024-12-07 01:54:29.065253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.815 01:54:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.815 [2024-12-07 01:54:29.224697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:23.815 [2024-12-07 01:54:29.226528] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:23.815 [2024-12-07 01:54:29.226570] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:23.815 [2024-12-07 01:54:29.226605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:23.815 [2024-12-07 01:54:29.226653] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:23.815 [2024-12-07 01:54:29.226710] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:23.815 [2024-12-07 01:54:29.226732] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:23.815 [2024-12-07 01:54:29.226748] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:23.815 [2024-12-07 01:54:29.226761] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.815 [2024-12-07 01:54:29.226770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:23.815 request: 00:10:23.815 { 00:10:23.815 "name": "raid_bdev1", 00:10:23.815 "raid_level": "raid1", 00:10:23.815 "base_bdevs": [ 00:10:23.815 "malloc1", 00:10:23.815 "malloc2", 00:10:23.815 "malloc3", 00:10:23.815 "malloc4" 00:10:23.815 ], 00:10:23.815 "superblock": false, 00:10:23.815 "method": "bdev_raid_create", 00:10:23.815 "req_id": 1 00:10:23.815 } 00:10:23.815 Got JSON-RPC error response 00:10:23.815 response: 00:10:23.815 { 00:10:23.815 "code": -17, 00:10:23.815 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:23.815 } 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.815 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.816 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:23.816 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.816 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:24.076 
01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.076 [2024-12-07 01:54:29.288539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:24.076 [2024-12-07 01:54:29.288621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.076 [2024-12-07 01:54:29.288658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:24.076 [2024-12-07 01:54:29.288693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.076 [2024-12-07 01:54:29.290785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.076 [2024-12-07 01:54:29.290850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:24.076 [2024-12-07 01:54:29.290937] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:24.076 [2024-12-07 01:54:29.291009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:24.076 pt1 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.076 01:54:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.076 "name": "raid_bdev1", 00:10:24.076 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:24.076 "strip_size_kb": 0, 00:10:24.076 "state": "configuring", 00:10:24.076 "raid_level": "raid1", 00:10:24.076 "superblock": true, 00:10:24.076 "num_base_bdevs": 4, 00:10:24.076 "num_base_bdevs_discovered": 1, 00:10:24.076 "num_base_bdevs_operational": 4, 00:10:24.076 "base_bdevs_list": [ 00:10:24.076 { 00:10:24.076 "name": "pt1", 00:10:24.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.076 "is_configured": true, 00:10:24.076 "data_offset": 2048, 00:10:24.076 "data_size": 63488 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": null, 00:10:24.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.076 "is_configured": false, 00:10:24.076 "data_offset": 2048, 00:10:24.076 "data_size": 63488 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": null, 00:10:24.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.076 
"is_configured": false, 00:10:24.076 "data_offset": 2048, 00:10:24.076 "data_size": 63488 00:10:24.076 }, 00:10:24.076 { 00:10:24.076 "name": null, 00:10:24.076 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.076 "is_configured": false, 00:10:24.076 "data_offset": 2048, 00:10:24.076 "data_size": 63488 00:10:24.076 } 00:10:24.076 ] 00:10:24.076 }' 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.076 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 [2024-12-07 01:54:29.735812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.336 [2024-12-07 01:54:29.735896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.336 [2024-12-07 01:54:29.735920] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:24.336 [2024-12-07 01:54:29.735929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.336 [2024-12-07 01:54:29.736336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.336 [2024-12-07 01:54:29.736359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.336 [2024-12-07 01:54:29.736441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.336 [2024-12-07 01:54:29.736479] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:24.336 pt2 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 [2024-12-07 01:54:29.747810] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.336 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.596 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.596 "name": "raid_bdev1", 00:10:24.596 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:24.596 "strip_size_kb": 0, 00:10:24.596 "state": "configuring", 00:10:24.596 "raid_level": "raid1", 00:10:24.596 "superblock": true, 00:10:24.596 "num_base_bdevs": 4, 00:10:24.596 "num_base_bdevs_discovered": 1, 00:10:24.596 "num_base_bdevs_operational": 4, 00:10:24.596 "base_bdevs_list": [ 00:10:24.596 { 00:10:24.596 "name": "pt1", 00:10:24.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.596 "is_configured": true, 00:10:24.596 "data_offset": 2048, 00:10:24.596 "data_size": 63488 00:10:24.596 }, 00:10:24.596 { 00:10:24.596 "name": null, 00:10:24.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.596 "is_configured": false, 00:10:24.596 "data_offset": 0, 00:10:24.596 "data_size": 63488 00:10:24.596 }, 00:10:24.596 { 00:10:24.596 "name": null, 00:10:24.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.596 "is_configured": false, 00:10:24.596 "data_offset": 2048, 00:10:24.596 "data_size": 63488 00:10:24.596 }, 00:10:24.596 { 00:10:24.596 "name": null, 00:10:24.596 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.596 "is_configured": false, 00:10:24.596 "data_offset": 2048, 00:10:24.596 "data_size": 63488 00:10:24.596 } 00:10:24.596 ] 00:10:24.596 }' 00:10:24.596 01:54:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.596 01:54:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:24.855 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.855 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:24.855 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.855 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.855 [2024-12-07 01:54:30.187124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:24.855 [2024-12-07 01:54:30.187270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.856 [2024-12-07 01:54:30.187308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:24.856 [2024-12-07 01:54:30.187341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.856 [2024-12-07 01:54:30.187781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.856 [2024-12-07 01:54:30.187846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:24.856 [2024-12-07 01:54:30.187957] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:24.856 [2024-12-07 01:54:30.188009] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:24.856 pt2 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:24.856 01:54:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.856 [2024-12-07 01:54:30.199039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:24.856 [2024-12-07 01:54:30.199133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.856 [2024-12-07 01:54:30.199192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:24.856 [2024-12-07 01:54:30.199222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.856 [2024-12-07 01:54:30.199589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.856 [2024-12-07 01:54:30.199650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:24.856 [2024-12-07 01:54:30.199753] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:24.856 [2024-12-07 01:54:30.199828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:24.856 pt3 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.856 [2024-12-07 01:54:30.211026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:24.856 [2024-12-07 
01:54:30.211133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.856 [2024-12-07 01:54:30.211166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:24.856 [2024-12-07 01:54:30.211195] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.856 [2024-12-07 01:54:30.211542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.856 [2024-12-07 01:54:30.211596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:24.856 [2024-12-07 01:54:30.211656] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:24.856 [2024-12-07 01:54:30.211693] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:24.856 [2024-12-07 01:54:30.211802] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:24.856 [2024-12-07 01:54:30.211813] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.856 [2024-12-07 01:54:30.212031] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:24.856 [2024-12-07 01:54:30.212159] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:24.856 [2024-12-07 01:54:30.212168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:24.856 [2024-12-07 01:54:30.212270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.856 pt4 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.856 "name": "raid_bdev1", 00:10:24.856 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:24.856 "strip_size_kb": 0, 00:10:24.856 "state": "online", 00:10:24.856 "raid_level": "raid1", 00:10:24.856 "superblock": true, 00:10:24.856 "num_base_bdevs": 4, 00:10:24.856 
"num_base_bdevs_discovered": 4, 00:10:24.856 "num_base_bdevs_operational": 4, 00:10:24.856 "base_bdevs_list": [ 00:10:24.856 { 00:10:24.856 "name": "pt1", 00:10:24.856 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:24.856 "is_configured": true, 00:10:24.856 "data_offset": 2048, 00:10:24.856 "data_size": 63488 00:10:24.856 }, 00:10:24.856 { 00:10:24.856 "name": "pt2", 00:10:24.856 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:24.856 "is_configured": true, 00:10:24.856 "data_offset": 2048, 00:10:24.856 "data_size": 63488 00:10:24.856 }, 00:10:24.856 { 00:10:24.856 "name": "pt3", 00:10:24.856 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:24.856 "is_configured": true, 00:10:24.856 "data_offset": 2048, 00:10:24.856 "data_size": 63488 00:10:24.856 }, 00:10:24.856 { 00:10:24.856 "name": "pt4", 00:10:24.856 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:24.856 "is_configured": true, 00:10:24.856 "data_offset": 2048, 00:10:24.856 "data_size": 63488 00:10:24.856 } 00:10:24.856 ] 00:10:24.856 }' 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.856 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:25.425 [2024-12-07 01:54:30.610691] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.425 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:25.425 "name": "raid_bdev1", 00:10:25.425 "aliases": [ 00:10:25.425 "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70" 00:10:25.425 ], 00:10:25.425 "product_name": "Raid Volume", 00:10:25.425 "block_size": 512, 00:10:25.425 "num_blocks": 63488, 00:10:25.425 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:25.425 "assigned_rate_limits": { 00:10:25.425 "rw_ios_per_sec": 0, 00:10:25.425 "rw_mbytes_per_sec": 0, 00:10:25.425 "r_mbytes_per_sec": 0, 00:10:25.425 "w_mbytes_per_sec": 0 00:10:25.425 }, 00:10:25.425 "claimed": false, 00:10:25.425 "zoned": false, 00:10:25.425 "supported_io_types": { 00:10:25.425 "read": true, 00:10:25.425 "write": true, 00:10:25.425 "unmap": false, 00:10:25.425 "flush": false, 00:10:25.425 "reset": true, 00:10:25.425 "nvme_admin": false, 00:10:25.425 "nvme_io": false, 00:10:25.425 "nvme_io_md": false, 00:10:25.425 "write_zeroes": true, 00:10:25.425 "zcopy": false, 00:10:25.425 "get_zone_info": false, 00:10:25.425 "zone_management": false, 00:10:25.425 "zone_append": false, 00:10:25.425 "compare": false, 00:10:25.425 "compare_and_write": false, 00:10:25.425 "abort": false, 00:10:25.425 "seek_hole": false, 00:10:25.425 "seek_data": false, 00:10:25.425 "copy": false, 00:10:25.425 "nvme_iov_md": false 00:10:25.425 }, 00:10:25.425 "memory_domains": [ 00:10:25.425 { 00:10:25.425 "dma_device_id": "system", 00:10:25.425 
"dma_device_type": 1 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.425 "dma_device_type": 2 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "system", 00:10:25.425 "dma_device_type": 1 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.425 "dma_device_type": 2 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "system", 00:10:25.425 "dma_device_type": 1 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.425 "dma_device_type": 2 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "system", 00:10:25.425 "dma_device_type": 1 00:10:25.425 }, 00:10:25.425 { 00:10:25.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.425 "dma_device_type": 2 00:10:25.425 } 00:10:25.425 ], 00:10:25.425 "driver_specific": { 00:10:25.425 "raid": { 00:10:25.425 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:25.425 "strip_size_kb": 0, 00:10:25.425 "state": "online", 00:10:25.425 "raid_level": "raid1", 00:10:25.426 "superblock": true, 00:10:25.426 "num_base_bdevs": 4, 00:10:25.426 "num_base_bdevs_discovered": 4, 00:10:25.426 "num_base_bdevs_operational": 4, 00:10:25.426 "base_bdevs_list": [ 00:10:25.426 { 00:10:25.426 "name": "pt1", 00:10:25.426 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:25.426 "is_configured": true, 00:10:25.426 "data_offset": 2048, 00:10:25.426 "data_size": 63488 00:10:25.426 }, 00:10:25.426 { 00:10:25.426 "name": "pt2", 00:10:25.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.426 "is_configured": true, 00:10:25.426 "data_offset": 2048, 00:10:25.426 "data_size": 63488 00:10:25.426 }, 00:10:25.426 { 00:10:25.426 "name": "pt3", 00:10:25.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.426 "is_configured": true, 00:10:25.426 "data_offset": 2048, 00:10:25.426 "data_size": 63488 00:10:25.426 }, 00:10:25.426 { 00:10:25.426 "name": "pt4", 00:10:25.426 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:25.426 "is_configured": true, 00:10:25.426 "data_offset": 2048, 00:10:25.426 "data_size": 63488 00:10:25.426 } 00:10:25.426 ] 00:10:25.426 } 00:10:25.426 } 00:10:25.426 }' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:25.426 pt2 00:10:25.426 pt3 00:10:25.426 pt4' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.426 01:54:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.426 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:25.426 [2024-12-07 01:54:30.882148] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 '!=' 1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 ']' 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.686 [2024-12-07 01:54:30.929796] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:25.686 01:54:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.686 "name": "raid_bdev1", 00:10:25.686 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:25.686 "strip_size_kb": 0, 00:10:25.686 "state": "online", 
00:10:25.686 "raid_level": "raid1", 00:10:25.686 "superblock": true, 00:10:25.686 "num_base_bdevs": 4, 00:10:25.686 "num_base_bdevs_discovered": 3, 00:10:25.686 "num_base_bdevs_operational": 3, 00:10:25.686 "base_bdevs_list": [ 00:10:25.686 { 00:10:25.686 "name": null, 00:10:25.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.686 "is_configured": false, 00:10:25.686 "data_offset": 0, 00:10:25.686 "data_size": 63488 00:10:25.686 }, 00:10:25.686 { 00:10:25.686 "name": "pt2", 00:10:25.686 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:25.686 "is_configured": true, 00:10:25.686 "data_offset": 2048, 00:10:25.686 "data_size": 63488 00:10:25.686 }, 00:10:25.686 { 00:10:25.686 "name": "pt3", 00:10:25.686 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:25.686 "is_configured": true, 00:10:25.686 "data_offset": 2048, 00:10:25.686 "data_size": 63488 00:10:25.686 }, 00:10:25.686 { 00:10:25.686 "name": "pt4", 00:10:25.686 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:25.686 "is_configured": true, 00:10:25.686 "data_offset": 2048, 00:10:25.686 "data_size": 63488 00:10:25.686 } 00:10:25.686 ] 00:10:25.686 }' 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.686 01:54:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.946 [2024-12-07 01:54:31.361032] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.946 [2024-12-07 01:54:31.361100] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.946 [2024-12-07 01:54:31.361197] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:25.946 [2024-12-07 01:54:31.361297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.946 [2024-12-07 01:54:31.361349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.946 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.206 
01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.206 [2024-12-07 01:54:31.444861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:26.206 [2024-12-07 01:54:31.444911] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.206 [2024-12-07 01:54:31.444926] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:26.206 [2024-12-07 01:54:31.444936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.206 [2024-12-07 01:54:31.447024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.206 [2024-12-07 01:54:31.447070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:26.206 [2024-12-07 01:54:31.447153] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:26.206 [2024-12-07 01:54:31.447186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:26.206 pt2 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.206 "name": "raid_bdev1", 00:10:26.206 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:26.206 "strip_size_kb": 0, 00:10:26.206 "state": "configuring", 00:10:26.206 "raid_level": "raid1", 00:10:26.206 "superblock": true, 00:10:26.206 "num_base_bdevs": 4, 00:10:26.206 "num_base_bdevs_discovered": 1, 00:10:26.206 "num_base_bdevs_operational": 3, 00:10:26.206 "base_bdevs_list": [ 00:10:26.206 { 00:10:26.206 "name": null, 00:10:26.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.206 "is_configured": false, 00:10:26.206 "data_offset": 2048, 00:10:26.206 "data_size": 63488 00:10:26.206 }, 00:10:26.206 { 00:10:26.206 "name": "pt2", 00:10:26.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.206 "is_configured": true, 00:10:26.206 "data_offset": 2048, 00:10:26.206 "data_size": 63488 00:10:26.206 }, 00:10:26.206 { 00:10:26.206 "name": null, 00:10:26.206 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.206 "is_configured": false, 00:10:26.206 "data_offset": 2048, 00:10:26.206 "data_size": 63488 00:10:26.206 }, 00:10:26.206 { 00:10:26.206 "name": null, 00:10:26.206 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.206 "is_configured": false, 00:10:26.206 "data_offset": 2048, 00:10:26.206 "data_size": 63488 00:10:26.206 } 00:10:26.206 ] 00:10:26.206 }' 
00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.206 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.466 [2024-12-07 01:54:31.864212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:26.466 [2024-12-07 01:54:31.864324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.466 [2024-12-07 01:54:31.864362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:26.466 [2024-12-07 01:54:31.864396] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.466 [2024-12-07 01:54:31.864831] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.466 [2024-12-07 01:54:31.864891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:26.466 [2024-12-07 01:54:31.864991] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:26.466 [2024-12-07 01:54:31.865051] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:26.466 pt3 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.466 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.467 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.467 "name": "raid_bdev1", 00:10:26.467 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:26.467 "strip_size_kb": 0, 00:10:26.467 "state": "configuring", 00:10:26.467 "raid_level": "raid1", 00:10:26.467 "superblock": true, 00:10:26.467 "num_base_bdevs": 4, 00:10:26.467 "num_base_bdevs_discovered": 2, 00:10:26.467 "num_base_bdevs_operational": 3, 00:10:26.467 
"base_bdevs_list": [ 00:10:26.467 { 00:10:26.467 "name": null, 00:10:26.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.467 "is_configured": false, 00:10:26.467 "data_offset": 2048, 00:10:26.467 "data_size": 63488 00:10:26.467 }, 00:10:26.467 { 00:10:26.467 "name": "pt2", 00:10:26.467 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:26.467 "is_configured": true, 00:10:26.467 "data_offset": 2048, 00:10:26.467 "data_size": 63488 00:10:26.467 }, 00:10:26.467 { 00:10:26.467 "name": "pt3", 00:10:26.467 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:26.467 "is_configured": true, 00:10:26.467 "data_offset": 2048, 00:10:26.467 "data_size": 63488 00:10:26.467 }, 00:10:26.467 { 00:10:26.467 "name": null, 00:10:26.467 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:26.467 "is_configured": false, 00:10:26.467 "data_offset": 2048, 00:10:26.467 "data_size": 63488 00:10:26.467 } 00:10:26.467 ] 00:10:26.467 }' 00:10:26.467 01:54:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.467 01:54:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.034 [2024-12-07 01:54:32.343375] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:27.034 [2024-12-07 01:54:32.343492] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.034 [2024-12-07 01:54:32.343516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:27.034 [2024-12-07 01:54:32.343528] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.034 [2024-12-07 01:54:32.343967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.034 [2024-12-07 01:54:32.343990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:27.034 [2024-12-07 01:54:32.344071] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:27.034 [2024-12-07 01:54:32.344095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:27.034 [2024-12-07 01:54:32.344192] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:27.034 [2024-12-07 01:54:32.344204] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.034 [2024-12-07 01:54:32.344441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:27.034 [2024-12-07 01:54:32.344573] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:27.034 [2024-12-07 01:54:32.344589] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:27.034 [2024-12-07 01:54:32.344716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.034 pt4 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.034 "name": "raid_bdev1", 00:10:27.034 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:27.034 "strip_size_kb": 0, 00:10:27.034 "state": "online", 00:10:27.034 "raid_level": "raid1", 00:10:27.034 "superblock": true, 00:10:27.034 "num_base_bdevs": 4, 00:10:27.034 "num_base_bdevs_discovered": 3, 00:10:27.034 "num_base_bdevs_operational": 3, 00:10:27.034 "base_bdevs_list": [ 00:10:27.034 { 00:10:27.034 "name": null, 00:10:27.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.034 "is_configured": false, 00:10:27.034 
"data_offset": 2048, 00:10:27.034 "data_size": 63488 00:10:27.034 }, 00:10:27.034 { 00:10:27.034 "name": "pt2", 00:10:27.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.034 "is_configured": true, 00:10:27.034 "data_offset": 2048, 00:10:27.034 "data_size": 63488 00:10:27.034 }, 00:10:27.034 { 00:10:27.034 "name": "pt3", 00:10:27.034 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.034 "is_configured": true, 00:10:27.034 "data_offset": 2048, 00:10:27.034 "data_size": 63488 00:10:27.034 }, 00:10:27.034 { 00:10:27.034 "name": "pt4", 00:10:27.034 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.034 "is_configured": true, 00:10:27.034 "data_offset": 2048, 00:10:27.034 "data_size": 63488 00:10:27.034 } 00:10:27.034 ] 00:10:27.034 }' 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.034 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.293 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:27.293 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.293 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.293 [2024-12-07 01:54:32.746791] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.293 [2024-12-07 01:54:32.746867] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.293 [2024-12-07 01:54:32.746963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.293 [2024-12-07 01:54:32.747074] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.293 [2024-12-07 01:54:32.747142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:27.293 01:54:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.552 [2024-12-07 01:54:32.822611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:27.552 [2024-12-07 01:54:32.822677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:27.552 [2024-12-07 01:54:32.822698] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:27.552 [2024-12-07 01:54:32.822723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.552 [2024-12-07 01:54:32.824892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.552 [2024-12-07 01:54:32.824925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:27.552 [2024-12-07 01:54:32.824993] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:27.552 [2024-12-07 01:54:32.825036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:27.552 [2024-12-07 01:54:32.825156] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:27.552 [2024-12-07 01:54:32.825168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:27.552 [2024-12-07 01:54:32.825189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:27.552 [2024-12-07 01:54:32.825217] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:27.552 [2024-12-07 01:54:32.825295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:27.552 pt1 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.552 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.552 "name": "raid_bdev1", 00:10:27.552 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:27.552 "strip_size_kb": 0, 00:10:27.552 "state": "configuring", 00:10:27.552 "raid_level": "raid1", 00:10:27.552 "superblock": true, 00:10:27.552 "num_base_bdevs": 4, 00:10:27.552 "num_base_bdevs_discovered": 2, 00:10:27.552 "num_base_bdevs_operational": 3, 00:10:27.552 "base_bdevs_list": [ 00:10:27.552 { 00:10:27.552 "name": null, 00:10:27.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.552 "is_configured": false, 00:10:27.552 "data_offset": 2048, 00:10:27.552 
"data_size": 63488 00:10:27.552 }, 00:10:27.552 { 00:10:27.552 "name": "pt2", 00:10:27.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:27.552 "is_configured": true, 00:10:27.552 "data_offset": 2048, 00:10:27.552 "data_size": 63488 00:10:27.552 }, 00:10:27.552 { 00:10:27.552 "name": "pt3", 00:10:27.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:27.552 "is_configured": true, 00:10:27.552 "data_offset": 2048, 00:10:27.552 "data_size": 63488 00:10:27.552 }, 00:10:27.552 { 00:10:27.552 "name": null, 00:10:27.552 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:27.552 "is_configured": false, 00:10:27.552 "data_offset": 2048, 00:10:27.553 "data_size": 63488 00:10:27.553 } 00:10:27.553 ] 00:10:27.553 }' 00:10:27.553 01:54:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.553 01:54:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.811 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:27.811 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:27.811 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.811 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.070 [2024-12-07 
01:54:33.317821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:28.070 [2024-12-07 01:54:33.317945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.070 [2024-12-07 01:54:33.317996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:28.070 [2024-12-07 01:54:33.318033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.070 [2024-12-07 01:54:33.318480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.070 [2024-12-07 01:54:33.318549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:28.070 [2024-12-07 01:54:33.318673] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:28.070 [2024-12-07 01:54:33.318735] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:28.070 [2024-12-07 01:54:33.318898] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:28.070 [2024-12-07 01:54:33.318948] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:28.070 [2024-12-07 01:54:33.319243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:28.070 [2024-12-07 01:54:33.319424] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:28.070 [2024-12-07 01:54:33.319468] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:28.070 [2024-12-07 01:54:33.319632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.070 pt4 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:28.070 01:54:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.070 "name": "raid_bdev1", 00:10:28.070 "uuid": "1ee384fe-afa5-4b9b-bff4-84a0aa80ff70", 00:10:28.070 "strip_size_kb": 0, 00:10:28.070 "state": "online", 00:10:28.070 "raid_level": "raid1", 00:10:28.070 "superblock": true, 00:10:28.070 "num_base_bdevs": 4, 00:10:28.070 "num_base_bdevs_discovered": 3, 00:10:28.070 "num_base_bdevs_operational": 3, 00:10:28.070 "base_bdevs_list": [ 00:10:28.070 { 
00:10:28.070 "name": null, 00:10:28.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.070 "is_configured": false, 00:10:28.070 "data_offset": 2048, 00:10:28.070 "data_size": 63488 00:10:28.070 }, 00:10:28.070 { 00:10:28.070 "name": "pt2", 00:10:28.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:28.070 "is_configured": true, 00:10:28.070 "data_offset": 2048, 00:10:28.070 "data_size": 63488 00:10:28.070 }, 00:10:28.070 { 00:10:28.070 "name": "pt3", 00:10:28.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:28.070 "is_configured": true, 00:10:28.070 "data_offset": 2048, 00:10:28.070 "data_size": 63488 00:10:28.070 }, 00:10:28.070 { 00:10:28.070 "name": "pt4", 00:10:28.070 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:28.070 "is_configured": true, 00:10:28.070 "data_offset": 2048, 00:10:28.070 "data_size": 63488 00:10:28.070 } 00:10:28.070 ] 00:10:28.070 }' 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.070 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.328 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:28.328 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:28.328 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.328 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.328 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:28.587 
01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.587 [2024-12-07 01:54:33.817287] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 '!=' 1ee384fe-afa5-4b9b-bff4-84a0aa80ff70 ']' 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84996 00:10:28.587 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84996 ']' 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84996 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84996 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84996' 00:10:28.588 killing process with pid 84996 00:10:28.588 01:54:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 84996 00:10:28.588 [2024-12-07 01:54:33.886608] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:28.588 [2024-12-07 01:54:33.886777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:28.588 01:54:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 84996 00:10:28.588 [2024-12-07 01:54:33.886882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:28.588 [2024-12-07 01:54:33.886894] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:28.588 [2024-12-07 01:54:33.930219] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.846 01:54:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:28.846 00:10:28.846 real 0m6.940s 00:10:28.846 user 0m11.681s 00:10:28.846 sys 0m1.445s 00:10:28.846 01:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.846 01:54:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.846 ************************************ 00:10:28.846 END TEST raid_superblock_test 00:10:28.846 ************************************ 00:10:28.846 01:54:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:28.846 01:54:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:28.846 01:54:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.846 01:54:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.846 ************************************ 00:10:28.846 START TEST raid_read_error_test 00:10:28.846 ************************************ 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:28.846 
01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.846 01:54:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ESASz6Xkgq 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85466 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85466 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 85466 ']' 00:10:28.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.846 01:54:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.105 [2024-12-07 01:54:34.344058] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:29.105 [2024-12-07 01:54:34.344190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85466 ] 00:10:29.105 [2024-12-07 01:54:34.488316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.105 [2024-12-07 01:54:34.536111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.362 [2024-12-07 01:54:34.577826] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.362 [2024-12-07 01:54:34.577944] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 BaseBdev1_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 true 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 [2024-12-07 01:54:35.199940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.927 [2024-12-07 01:54:35.200016] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.927 [2024-12-07 01:54:35.200040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:29.927 [2024-12-07 01:54:35.200049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.927 [2024-12-07 01:54:35.202288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.927 [2024-12-07 01:54:35.202354] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.927 BaseBdev1 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 BaseBdev2_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 true 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.927 [2024-12-07 01:54:35.247984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.927 [2024-12-07 01:54:35.248109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.927 [2024-12-07 01:54:35.248140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:29.927 [2024-12-07 01:54:35.248150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.927 [2024-12-07 01:54:35.250288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.927 [2024-12-07 01:54:35.250323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.927 BaseBdev2 00:10:29.927 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 BaseBdev3_malloc 00:10:29.928 01:54:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 true 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 [2024-12-07 01:54:35.288360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.928 [2024-12-07 01:54:35.288406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.928 [2024-12-07 01:54:35.288442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:29.928 [2024-12-07 01:54:35.288451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.928 [2024-12-07 01:54:35.290497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.928 [2024-12-07 01:54:35.290570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:29.928 BaseBdev3 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 BaseBdev4_malloc 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 true 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 [2024-12-07 01:54:35.328775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:29.928 [2024-12-07 01:54:35.328818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.928 [2024-12-07 01:54:35.328856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:29.928 [2024-12-07 01:54:35.328865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.928 [2024-12-07 01:54:35.330930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.928 [2024-12-07 01:54:35.331013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:29.928 BaseBdev4 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 [2024-12-07 01:54:35.340821] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.928 [2024-12-07 01:54:35.342594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.928 [2024-12-07 01:54:35.342668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.928 [2024-12-07 01:54:35.342750] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.928 [2024-12-07 01:54:35.342940] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:29.928 [2024-12-07 01:54:35.342950] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.928 [2024-12-07 01:54:35.343212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:29.928 [2024-12-07 01:54:35.343363] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:29.928 [2024-12-07 01:54:35.343381] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:29.928 [2024-12-07 01:54:35.343501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:29.928 01:54:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.186 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.186 "name": "raid_bdev1", 00:10:30.186 "uuid": "46438297-078d-42f4-ab1b-68dc5aeefb72", 00:10:30.186 "strip_size_kb": 0, 00:10:30.186 "state": "online", 00:10:30.186 "raid_level": "raid1", 00:10:30.186 "superblock": true, 00:10:30.186 "num_base_bdevs": 4, 00:10:30.186 "num_base_bdevs_discovered": 4, 00:10:30.186 "num_base_bdevs_operational": 4, 00:10:30.186 "base_bdevs_list": [ 00:10:30.186 { 
00:10:30.186 "name": "BaseBdev1", 00:10:30.186 "uuid": "5ff5ee35-51a5-5385-a0c8-224314caa794", 00:10:30.186 "is_configured": true, 00:10:30.186 "data_offset": 2048, 00:10:30.186 "data_size": 63488 00:10:30.186 }, 00:10:30.186 { 00:10:30.186 "name": "BaseBdev2", 00:10:30.186 "uuid": "3b1b0843-1593-563e-838b-6d3e7a025e1f", 00:10:30.186 "is_configured": true, 00:10:30.186 "data_offset": 2048, 00:10:30.186 "data_size": 63488 00:10:30.186 }, 00:10:30.186 { 00:10:30.186 "name": "BaseBdev3", 00:10:30.186 "uuid": "8764112d-37da-5514-aac8-d88b4f4f4cbf", 00:10:30.186 "is_configured": true, 00:10:30.186 "data_offset": 2048, 00:10:30.186 "data_size": 63488 00:10:30.186 }, 00:10:30.186 { 00:10:30.186 "name": "BaseBdev4", 00:10:30.186 "uuid": "92e313e7-78d5-5efd-89ab-8d274d4fe65a", 00:10:30.186 "is_configured": true, 00:10:30.186 "data_offset": 2048, 00:10:30.186 "data_size": 63488 00:10:30.186 } 00:10:30.186 ] 00:10:30.186 }' 00:10:30.186 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.186 01:54:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.444 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:30.444 01:54:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:30.444 [2024-12-07 01:54:35.812359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.380 01:54:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.380 01:54:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.380 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.380 "name": "raid_bdev1", 00:10:31.380 "uuid": "46438297-078d-42f4-ab1b-68dc5aeefb72", 00:10:31.380 "strip_size_kb": 0, 00:10:31.380 "state": "online", 00:10:31.381 "raid_level": "raid1", 00:10:31.381 "superblock": true, 00:10:31.381 "num_base_bdevs": 4, 00:10:31.381 "num_base_bdevs_discovered": 4, 00:10:31.381 "num_base_bdevs_operational": 4, 00:10:31.381 "base_bdevs_list": [ 00:10:31.381 { 00:10:31.381 "name": "BaseBdev1", 00:10:31.381 "uuid": "5ff5ee35-51a5-5385-a0c8-224314caa794", 00:10:31.381 "is_configured": true, 00:10:31.381 "data_offset": 2048, 00:10:31.381 "data_size": 63488 00:10:31.381 }, 00:10:31.381 { 00:10:31.381 "name": "BaseBdev2", 00:10:31.381 "uuid": "3b1b0843-1593-563e-838b-6d3e7a025e1f", 00:10:31.381 "is_configured": true, 00:10:31.381 "data_offset": 2048, 00:10:31.381 "data_size": 63488 00:10:31.381 }, 00:10:31.381 { 00:10:31.381 "name": "BaseBdev3", 00:10:31.381 "uuid": "8764112d-37da-5514-aac8-d88b4f4f4cbf", 00:10:31.381 "is_configured": true, 00:10:31.381 "data_offset": 2048, 00:10:31.381 "data_size": 63488 00:10:31.381 }, 00:10:31.381 { 00:10:31.381 "name": "BaseBdev4", 00:10:31.381 "uuid": "92e313e7-78d5-5efd-89ab-8d274d4fe65a", 00:10:31.381 "is_configured": true, 00:10:31.381 "data_offset": 2048, 00:10:31.381 "data_size": 63488 00:10:31.381 } 00:10:31.381 ] 00:10:31.381 }' 00:10:31.381 01:54:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.381 01:54:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.948 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.948 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.948 01:54:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:31.948 [2024-12-07 01:54:37.228165] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.948 [2024-12-07 01:54:37.228199] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.948 [2024-12-07 01:54:37.230804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.948 [2024-12-07 01:54:37.230898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.948 [2024-12-07 01:54:37.231028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.948 [2024-12-07 01:54:37.231038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:31.948 { 00:10:31.948 "results": [ 00:10:31.948 { 00:10:31.948 "job": "raid_bdev1", 00:10:31.948 "core_mask": "0x1", 00:10:31.948 "workload": "randrw", 00:10:31.948 "percentage": 50, 00:10:31.948 "status": "finished", 00:10:31.948 "queue_depth": 1, 00:10:31.948 "io_size": 131072, 00:10:31.948 "runtime": 1.416787, 00:10:31.948 "iops": 11377.151258446047, 00:10:31.948 "mibps": 1422.1439073057559, 00:10:31.948 "io_failed": 0, 00:10:31.948 "io_timeout": 0, 00:10:31.948 "avg_latency_us": 85.2782634396848, 00:10:31.949 "min_latency_us": 22.134497816593885, 00:10:31.949 "max_latency_us": 1445.2262008733624 00:10:31.949 } 00:10:31.949 ], 00:10:31.949 "core_count": 1 00:10:31.949 } 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85466 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 85466 ']' 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 85466 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85466 00:10:31.949 killing process with pid 85466 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85466' 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 85466 00:10:31.949 [2024-12-07 01:54:37.278397] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.949 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 85466 00:10:31.949 [2024-12-07 01:54:37.314139] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ESASz6Xkgq 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:32.209 ************************************ 00:10:32.209 END TEST raid_read_error_test 
00:10:32.209 ************************************ 00:10:32.209 00:10:32.209 real 0m3.315s 00:10:32.209 user 0m4.155s 00:10:32.209 sys 0m0.518s 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.209 01:54:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.209 01:54:37 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:32.209 01:54:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:32.209 01:54:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.209 01:54:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.209 ************************************ 00:10:32.209 START TEST raid_write_error_test 00:10:32.209 ************************************ 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YEeE0fjTuY 00:10:32.209 01:54:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85601 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85601 00:10:32.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 85601 ']' 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:32.209 01:54:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.477 [2024-12-07 01:54:37.731306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:10:32.477 [2024-12-07 01:54:37.731429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85601 ] 00:10:32.477 [2024-12-07 01:54:37.876926] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.477 [2024-12-07 01:54:37.924353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.747 [2024-12-07 01:54:37.966351] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.747 [2024-12-07 01:54:37.966387] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.315 BaseBdev1_malloc 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.315 true 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.315 [2024-12-07 01:54:38.599827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:33.315 [2024-12-07 01:54:38.599922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.315 [2024-12-07 01:54:38.599949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:33.315 [2024-12-07 01:54:38.599959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.315 [2024-12-07 01:54:38.602097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.315 [2024-12-07 01:54:38.602135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:33.315 BaseBdev1 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.315 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.315 BaseBdev2_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:33.316 01:54:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 true 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 [2024-12-07 01:54:38.652329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:33.316 [2024-12-07 01:54:38.652383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.316 [2024-12-07 01:54:38.652404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:33.316 [2024-12-07 01:54:38.652413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.316 [2024-12-07 01:54:38.654474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.316 [2024-12-07 01:54:38.654508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:33.316 BaseBdev2 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:33.316 BaseBdev3_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 true 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 [2024-12-07 01:54:38.692624] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:33.316 [2024-12-07 01:54:38.692678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.316 [2024-12-07 01:54:38.692697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:33.316 [2024-12-07 01:54:38.692721] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.316 [2024-12-07 01:54:38.694729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.316 [2024-12-07 01:54:38.694798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:33.316 BaseBdev3 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 BaseBdev4_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 true 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 [2024-12-07 01:54:38.732965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:33.316 [2024-12-07 01:54:38.733052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:33.316 [2024-12-07 01:54:38.733089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:33.316 [2024-12-07 01:54:38.733127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:33.316 [2024-12-07 01:54:38.735236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:33.316 [2024-12-07 01:54:38.735306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:33.316 BaseBdev4 
00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.316 [2024-12-07 01:54:38.745005] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.316 [2024-12-07 01:54:38.746801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.316 [2024-12-07 01:54:38.746910] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:33.316 [2024-12-07 01:54:38.747004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.316 [2024-12-07 01:54:38.747252] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:33.316 [2024-12-07 01:54:38.747299] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:33.316 [2024-12-07 01:54:38.747567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:33.316 [2024-12-07 01:54:38.747762] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:33.316 [2024-12-07 01:54:38.747811] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:33.316 [2024-12-07 01:54:38.747988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.316 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.575 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.575 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.575 "name": "raid_bdev1", 00:10:33.575 "uuid": "41ec1b88-10f6-4c92-bffc-b4fa4ef86d12", 00:10:33.575 "strip_size_kb": 0, 00:10:33.575 "state": "online", 00:10:33.575 "raid_level": "raid1", 00:10:33.575 "superblock": true, 00:10:33.575 "num_base_bdevs": 4, 00:10:33.575 "num_base_bdevs_discovered": 4, 00:10:33.575 
"num_base_bdevs_operational": 4, 00:10:33.575 "base_bdevs_list": [ 00:10:33.575 { 00:10:33.575 "name": "BaseBdev1", 00:10:33.575 "uuid": "7ec75b1a-ecbb-5723-8e05-ce6ba92dbe73", 00:10:33.575 "is_configured": true, 00:10:33.575 "data_offset": 2048, 00:10:33.575 "data_size": 63488 00:10:33.575 }, 00:10:33.575 { 00:10:33.575 "name": "BaseBdev2", 00:10:33.575 "uuid": "58f95892-a1c5-5ace-99f9-a6c641852f63", 00:10:33.575 "is_configured": true, 00:10:33.575 "data_offset": 2048, 00:10:33.575 "data_size": 63488 00:10:33.575 }, 00:10:33.575 { 00:10:33.575 "name": "BaseBdev3", 00:10:33.575 "uuid": "e90f257b-6f05-524c-8717-73cb075a48a2", 00:10:33.575 "is_configured": true, 00:10:33.575 "data_offset": 2048, 00:10:33.575 "data_size": 63488 00:10:33.575 }, 00:10:33.575 { 00:10:33.575 "name": "BaseBdev4", 00:10:33.575 "uuid": "b8875598-f1a3-5547-a2c9-fb471dd3b6c5", 00:10:33.575 "is_configured": true, 00:10:33.575 "data_offset": 2048, 00:10:33.575 "data_size": 63488 00:10:33.575 } 00:10:33.575 ] 00:10:33.575 }' 00:10:33.575 01:54:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.575 01:54:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.834 01:54:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:33.834 01:54:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:33.834 [2024-12-07 01:54:39.292528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.771 [2024-12-07 01:54:40.207351] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:34.771 [2024-12-07 01:54:40.207462] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.771 [2024-12-07 01:54:40.207729] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.771 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.030 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.030 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.030 "name": "raid_bdev1", 00:10:35.030 "uuid": "41ec1b88-10f6-4c92-bffc-b4fa4ef86d12", 00:10:35.030 "strip_size_kb": 0, 00:10:35.030 "state": "online", 00:10:35.030 "raid_level": "raid1", 00:10:35.030 "superblock": true, 00:10:35.030 "num_base_bdevs": 4, 00:10:35.030 "num_base_bdevs_discovered": 3, 00:10:35.030 "num_base_bdevs_operational": 3, 00:10:35.030 "base_bdevs_list": [ 00:10:35.030 { 00:10:35.030 "name": null, 00:10:35.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.031 "is_configured": false, 00:10:35.031 "data_offset": 0, 00:10:35.031 "data_size": 63488 00:10:35.031 }, 00:10:35.031 { 00:10:35.031 "name": "BaseBdev2", 00:10:35.031 "uuid": "58f95892-a1c5-5ace-99f9-a6c641852f63", 00:10:35.031 "is_configured": true, 00:10:35.031 "data_offset": 2048, 00:10:35.031 "data_size": 63488 00:10:35.031 }, 00:10:35.031 { 00:10:35.031 "name": "BaseBdev3", 00:10:35.031 "uuid": "e90f257b-6f05-524c-8717-73cb075a48a2", 00:10:35.031 "is_configured": true, 00:10:35.031 "data_offset": 2048, 00:10:35.031 "data_size": 63488 00:10:35.031 }, 00:10:35.031 { 00:10:35.031 "name": "BaseBdev4", 00:10:35.031 "uuid": "b8875598-f1a3-5547-a2c9-fb471dd3b6c5", 00:10:35.031 "is_configured": true, 00:10:35.031 "data_offset": 2048, 00:10:35.031 "data_size": 63488 00:10:35.031 } 00:10:35.031 ] 
00:10:35.031 }' 00:10:35.031 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.031 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.290 [2024-12-07 01:54:40.642207] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:35.290 [2024-12-07 01:54:40.642304] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.290 [2024-12-07 01:54:40.644811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.290 [2024-12-07 01:54:40.644902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.290 [2024-12-07 01:54:40.645018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.290 [2024-12-07 01:54:40.645069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:35.290 { 00:10:35.290 "results": [ 00:10:35.290 { 00:10:35.290 "job": "raid_bdev1", 00:10:35.290 "core_mask": "0x1", 00:10:35.290 "workload": "randrw", 00:10:35.290 "percentage": 50, 00:10:35.290 "status": "finished", 00:10:35.290 "queue_depth": 1, 00:10:35.290 "io_size": 131072, 00:10:35.290 "runtime": 1.350439, 00:10:35.290 "iops": 12498.15800639644, 00:10:35.290 "mibps": 1562.269750799555, 00:10:35.290 "io_failed": 0, 00:10:35.290 "io_timeout": 0, 00:10:35.290 "avg_latency_us": 77.39500665189847, 00:10:35.290 "min_latency_us": 22.022707423580787, 00:10:35.290 "max_latency_us": 1373.6803493449781 00:10:35.290 } 00:10:35.290 ], 00:10:35.290 "core_count": 1 
00:10:35.290 } 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85601 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 85601 ']' 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 85601 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85601 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85601' 00:10:35.290 killing process with pid 85601 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 85601 00:10:35.290 [2024-12-07 01:54:40.693240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.290 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 85601 00:10:35.290 [2024-12-07 01:54:40.728656] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YEeE0fjTuY 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:35.550 ************************************ 00:10:35.550 END TEST raid_write_error_test 00:10:35.550 ************************************ 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:35.550 00:10:35.550 real 0m3.339s 00:10:35.550 user 0m4.219s 00:10:35.550 sys 0m0.526s 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.550 01:54:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.810 01:54:41 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:35.810 01:54:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:35.810 01:54:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:35.810 01:54:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:35.810 01:54:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.810 01:54:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:35.810 ************************************ 00:10:35.810 START TEST raid_rebuild_test 00:10:35.810 ************************************ 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:35.810 
01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85728 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85728 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 85728 ']' 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.810 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.810 [2024-12-07 01:54:41.135277] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:35.810 [2024-12-07 01:54:41.135466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:10:35.810 Zero copy mechanism will not be used. 
00:10:35.810 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85728 ] 00:10:36.069 [2024-12-07 01:54:41.279839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.069 [2024-12-07 01:54:41.325513] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.069 [2024-12-07 01:54:41.366770] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.069 [2024-12-07 01:54:41.366803] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 BaseBdev1_malloc 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 [2024-12-07 01:54:41.992427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:36.637 [2024-12-07 01:54:41.992557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.637 [2024-12-07 
01:54:41.992603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:36.637 [2024-12-07 01:54:41.992639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.637 [2024-12-07 01:54:41.994719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.637 [2024-12-07 01:54:41.994788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:36.637 BaseBdev1 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 BaseBdev2_malloc 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 [2024-12-07 01:54:42.031502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:36.637 [2024-12-07 01:54:42.031610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.637 [2024-12-07 01:54:42.031641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:36.637 [2024-12-07 01:54:42.031652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:10:36.637 [2024-12-07 01:54:42.033939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.637 [2024-12-07 01:54:42.033971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:36.637 BaseBdev2 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 spare_malloc 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 spare_delay 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 [2024-12-07 01:54:42.071827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:36.637 [2024-12-07 01:54:42.071879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:36.637 [2024-12-07 01:54:42.071900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000008480 00:10:36.637 [2024-12-07 01:54:42.071909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:36.637 [2024-12-07 01:54:42.073963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:36.637 [2024-12-07 01:54:42.074035] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:36.637 spare 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.637 [2024-12-07 01:54:42.083851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.637 [2024-12-07 01:54:42.085668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.637 [2024-12-07 01:54:42.085767] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:36.637 [2024-12-07 01:54:42.085780] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:36.637 [2024-12-07 01:54:42.086039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:36.637 [2024-12-07 01:54:42.086175] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:36.637 [2024-12-07 01:54:42.086192] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:36.637 [2024-12-07 01:54:42.086317] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.637 
01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.637 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.896 "name": "raid_bdev1", 00:10:36.896 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:36.896 "strip_size_kb": 0, 00:10:36.896 "state": "online", 00:10:36.896 "raid_level": "raid1", 00:10:36.896 "superblock": false, 00:10:36.896 "num_base_bdevs": 2, 00:10:36.896 "num_base_bdevs_discovered": 
2, 00:10:36.896 "num_base_bdevs_operational": 2, 00:10:36.896 "base_bdevs_list": [ 00:10:36.896 { 00:10:36.896 "name": "BaseBdev1", 00:10:36.896 "uuid": "bb0c491f-e8f5-58a3-9957-6916d025c57f", 00:10:36.896 "is_configured": true, 00:10:36.896 "data_offset": 0, 00:10:36.896 "data_size": 65536 00:10:36.896 }, 00:10:36.896 { 00:10:36.896 "name": "BaseBdev2", 00:10:36.896 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:36.896 "is_configured": true, 00:10:36.896 "data_offset": 0, 00:10:36.896 "data_size": 65536 00:10:36.896 } 00:10:36.896 ] 00:10:36.896 }' 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.896 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 [2024-12-07 01:54:42.551371] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 01:54:42 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:37.414 [2024-12-07 01:54:42.810721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:37.414 /dev/nbd0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.414 1+0 records in 00:10:37.414 1+0 records out 00:10:37.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235616 s, 17.4 MB/s 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:10:37.414 01:54:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:10:41.596 65536+0 records in 00:10:41.596 65536+0 records out 00:10:41.596 33554432 bytes (34 MB, 32 MiB) copied, 3.85001 s, 8.7 MB/s 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:41.596 [2024-12-07 01:54:46.915694] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.596 
01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.596 [2024-12-07 01:54:46.947713] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.596 "name": "raid_bdev1", 00:10:41.596 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:41.596 "strip_size_kb": 0, 00:10:41.596 "state": "online", 00:10:41.596 "raid_level": "raid1", 00:10:41.596 "superblock": false, 00:10:41.596 "num_base_bdevs": 2, 00:10:41.596 "num_base_bdevs_discovered": 1, 00:10:41.596 "num_base_bdevs_operational": 1, 00:10:41.596 "base_bdevs_list": [ 00:10:41.596 { 00:10:41.596 "name": null, 00:10:41.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.596 "is_configured": false, 00:10:41.596 "data_offset": 0, 00:10:41.596 "data_size": 65536 00:10:41.596 }, 00:10:41.596 { 00:10:41.596 "name": "BaseBdev2", 00:10:41.596 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:41.596 "is_configured": true, 00:10:41.596 "data_offset": 0, 00:10:41.596 "data_size": 65536 00:10:41.596 } 00:10:41.596 ] 00:10:41.596 }' 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.596 01:54:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.180 01:54:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:42.180 01:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.180 01:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.180 [2024-12-07 01:54:47.442916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:42.180 [2024-12-07 01:54:47.447232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:10:42.180 01:54:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.180 01:54:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:42.180 [2024-12-07 01:54:47.449587] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:43.119 "name": "raid_bdev1", 00:10:43.119 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:43.119 "strip_size_kb": 0, 00:10:43.119 "state": "online", 00:10:43.119 "raid_level": "raid1", 00:10:43.119 "superblock": false, 00:10:43.119 "num_base_bdevs": 2, 00:10:43.119 "num_base_bdevs_discovered": 2, 00:10:43.119 "num_base_bdevs_operational": 2, 00:10:43.119 "process": { 00:10:43.119 "type": "rebuild", 00:10:43.119 "target": "spare", 00:10:43.119 "progress": { 00:10:43.119 "blocks": 20480, 00:10:43.119 "percent": 31 00:10:43.119 } 00:10:43.119 }, 00:10:43.119 "base_bdevs_list": [ 00:10:43.119 { 
00:10:43.119 "name": "spare", 00:10:43.119 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:43.119 "is_configured": true, 00:10:43.119 "data_offset": 0, 00:10:43.119 "data_size": 65536 00:10:43.119 }, 00:10:43.119 { 00:10:43.119 "name": "BaseBdev2", 00:10:43.119 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:43.119 "is_configured": true, 00:10:43.119 "data_offset": 0, 00:10:43.119 "data_size": 65536 00:10:43.119 } 00:10:43.119 ] 00:10:43.119 }' 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:43.119 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:43.120 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:43.120 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:43.120 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.120 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.120 [2024-12-07 01:54:48.570136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:43.378 [2024-12-07 01:54:48.654619] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:43.378 [2024-12-07 01:54:48.654762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.378 [2024-12-07 01:54:48.654808] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:43.378 [2024-12-07 01:54:48.654837] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.378 01:54:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.378 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.379 "name": "raid_bdev1", 00:10:43.379 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:43.379 "strip_size_kb": 0, 00:10:43.379 "state": "online", 00:10:43.379 "raid_level": "raid1", 00:10:43.379 "superblock": false, 00:10:43.379 "num_base_bdevs": 2, 00:10:43.379 "num_base_bdevs_discovered": 1, 
00:10:43.379 "num_base_bdevs_operational": 1, 00:10:43.379 "base_bdevs_list": [ 00:10:43.379 { 00:10:43.379 "name": null, 00:10:43.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.379 "is_configured": false, 00:10:43.379 "data_offset": 0, 00:10:43.379 "data_size": 65536 00:10:43.379 }, 00:10:43.379 { 00:10:43.379 "name": "BaseBdev2", 00:10:43.379 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:43.379 "is_configured": true, 00:10:43.379 "data_offset": 0, 00:10:43.379 "data_size": 65536 00:10:43.379 } 00:10:43.379 ] 00:10:43.379 }' 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.379 01:54:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.637 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:43.637 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:43.896 "name": "raid_bdev1", 00:10:43.896 "uuid": 
"2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:43.896 "strip_size_kb": 0, 00:10:43.896 "state": "online", 00:10:43.896 "raid_level": "raid1", 00:10:43.896 "superblock": false, 00:10:43.896 "num_base_bdevs": 2, 00:10:43.896 "num_base_bdevs_discovered": 1, 00:10:43.896 "num_base_bdevs_operational": 1, 00:10:43.896 "base_bdevs_list": [ 00:10:43.896 { 00:10:43.896 "name": null, 00:10:43.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.896 "is_configured": false, 00:10:43.896 "data_offset": 0, 00:10:43.896 "data_size": 65536 00:10:43.896 }, 00:10:43.896 { 00:10:43.896 "name": "BaseBdev2", 00:10:43.896 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:43.896 "is_configured": true, 00:10:43.896 "data_offset": 0, 00:10:43.896 "data_size": 65536 00:10:43.896 } 00:10:43.896 ] 00:10:43.896 }' 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.896 [2024-12-07 01:54:49.202597] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:43.896 [2024-12-07 01:54:49.206697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.896 01:54:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:10:43.896 [2024-12-07 01:54:49.208621] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:44.833 "name": "raid_bdev1", 00:10:44.833 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:44.833 "strip_size_kb": 0, 00:10:44.833 "state": "online", 00:10:44.833 "raid_level": "raid1", 00:10:44.833 "superblock": false, 00:10:44.833 "num_base_bdevs": 2, 00:10:44.833 "num_base_bdevs_discovered": 2, 00:10:44.833 "num_base_bdevs_operational": 2, 00:10:44.833 "process": { 00:10:44.833 "type": "rebuild", 00:10:44.833 "target": "spare", 00:10:44.833 "progress": { 00:10:44.833 "blocks": 20480, 00:10:44.833 "percent": 31 00:10:44.833 } 00:10:44.833 }, 00:10:44.833 "base_bdevs_list": [ 00:10:44.833 { 00:10:44.833 "name": "spare", 00:10:44.833 "uuid": 
"20436056-8178-5271-bde6-74f828b14c7b", 00:10:44.833 "is_configured": true, 00:10:44.833 "data_offset": 0, 00:10:44.833 "data_size": 65536 00:10:44.833 }, 00:10:44.833 { 00:10:44.833 "name": "BaseBdev2", 00:10:44.833 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:44.833 "is_configured": true, 00:10:44.833 "data_offset": 0, 00:10:44.833 "data_size": 65536 00:10:44.833 } 00:10:44.833 ] 00:10:44.833 }' 00:10:44.833 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=288 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:45.092 "name": "raid_bdev1", 00:10:45.092 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:45.092 "strip_size_kb": 0, 00:10:45.092 "state": "online", 00:10:45.092 "raid_level": "raid1", 00:10:45.092 "superblock": false, 00:10:45.092 "num_base_bdevs": 2, 00:10:45.092 "num_base_bdevs_discovered": 2, 00:10:45.092 "num_base_bdevs_operational": 2, 00:10:45.092 "process": { 00:10:45.092 "type": "rebuild", 00:10:45.092 "target": "spare", 00:10:45.092 "progress": { 00:10:45.092 "blocks": 22528, 00:10:45.092 "percent": 34 00:10:45.092 } 00:10:45.092 }, 00:10:45.092 "base_bdevs_list": [ 00:10:45.092 { 00:10:45.092 "name": "spare", 00:10:45.092 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:45.092 "is_configured": true, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 65536 00:10:45.092 }, 00:10:45.092 { 00:10:45.092 "name": "BaseBdev2", 00:10:45.092 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:45.092 "is_configured": true, 00:10:45.092 "data_offset": 0, 00:10:45.092 "data_size": 65536 00:10:45.092 } 00:10:45.092 ] 00:10:45.092 }' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:45.092 01:54:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.027 01:54:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.285 "name": "raid_bdev1", 00:10:46.285 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:46.285 "strip_size_kb": 0, 00:10:46.285 "state": "online", 00:10:46.285 "raid_level": "raid1", 00:10:46.285 "superblock": false, 00:10:46.285 "num_base_bdevs": 2, 00:10:46.285 "num_base_bdevs_discovered": 2, 00:10:46.285 "num_base_bdevs_operational": 2, 00:10:46.285 "process": { 00:10:46.285 "type": "rebuild", 00:10:46.285 "target": "spare", 
00:10:46.285 "progress": { 00:10:46.285 "blocks": 45056, 00:10:46.285 "percent": 68 00:10:46.285 } 00:10:46.285 }, 00:10:46.285 "base_bdevs_list": [ 00:10:46.285 { 00:10:46.285 "name": "spare", 00:10:46.285 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:46.285 "is_configured": true, 00:10:46.285 "data_offset": 0, 00:10:46.285 "data_size": 65536 00:10:46.285 }, 00:10:46.285 { 00:10:46.285 "name": "BaseBdev2", 00:10:46.285 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:46.285 "is_configured": true, 00:10:46.285 "data_offset": 0, 00:10:46.285 "data_size": 65536 00:10:46.285 } 00:10:46.285 ] 00:10:46.285 }' 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.285 01:54:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:47.222 [2024-12-07 01:54:52.421532] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:47.222 [2024-12-07 01:54:52.421642] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:47.222 [2024-12-07 01:54:52.421703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:47.222 "name": "raid_bdev1", 00:10:47.222 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:47.222 "strip_size_kb": 0, 00:10:47.222 "state": "online", 00:10:47.222 "raid_level": "raid1", 00:10:47.222 "superblock": false, 00:10:47.222 "num_base_bdevs": 2, 00:10:47.222 "num_base_bdevs_discovered": 2, 00:10:47.222 "num_base_bdevs_operational": 2, 00:10:47.222 "base_bdevs_list": [ 00:10:47.222 { 00:10:47.222 "name": "spare", 00:10:47.222 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:47.222 "is_configured": true, 00:10:47.222 "data_offset": 0, 00:10:47.222 "data_size": 65536 00:10:47.222 }, 00:10:47.222 { 00:10:47.222 "name": "BaseBdev2", 00:10:47.222 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:47.222 "is_configured": true, 00:10:47.222 "data_offset": 0, 00:10:47.222 "data_size": 65536 00:10:47.222 } 00:10:47.222 ] 00:10:47.222 }' 00:10:47.222 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.481 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:47.481 "name": "raid_bdev1", 00:10:47.481 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:47.481 "strip_size_kb": 0, 00:10:47.481 "state": "online", 00:10:47.481 "raid_level": "raid1", 00:10:47.481 "superblock": false, 00:10:47.481 "num_base_bdevs": 2, 00:10:47.481 "num_base_bdevs_discovered": 2, 00:10:47.481 "num_base_bdevs_operational": 2, 00:10:47.481 "base_bdevs_list": [ 00:10:47.481 { 00:10:47.481 "name": "spare", 00:10:47.481 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:47.481 "is_configured": true, 00:10:47.481 "data_offset": 0, 00:10:47.481 "data_size": 65536 
00:10:47.481 }, 00:10:47.481 { 00:10:47.481 "name": "BaseBdev2", 00:10:47.481 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:47.481 "is_configured": true, 00:10:47.481 "data_offset": 0, 00:10:47.482 "data_size": 65536 00:10:47.482 } 00:10:47.482 ] 00:10:47.482 }' 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.482 "name": "raid_bdev1", 00:10:47.482 "uuid": "2b4e8904-fe42-491a-b4f4-6b74005335bb", 00:10:47.482 "strip_size_kb": 0, 00:10:47.482 "state": "online", 00:10:47.482 "raid_level": "raid1", 00:10:47.482 "superblock": false, 00:10:47.482 "num_base_bdevs": 2, 00:10:47.482 "num_base_bdevs_discovered": 2, 00:10:47.482 "num_base_bdevs_operational": 2, 00:10:47.482 "base_bdevs_list": [ 00:10:47.482 { 00:10:47.482 "name": "spare", 00:10:47.482 "uuid": "20436056-8178-5271-bde6-74f828b14c7b", 00:10:47.482 "is_configured": true, 00:10:47.482 "data_offset": 0, 00:10:47.482 "data_size": 65536 00:10:47.482 }, 00:10:47.482 { 00:10:47.482 "name": "BaseBdev2", 00:10:47.482 "uuid": "1d184e1f-3637-5440-ae51-7348a22af766", 00:10:47.482 "is_configured": true, 00:10:47.482 "data_offset": 0, 00:10:47.482 "data_size": 65536 00:10:47.482 } 00:10:47.482 ] 00:10:47.482 }' 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.482 01:54:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.048 [2024-12-07 01:54:53.240725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:48.048 [2024-12-07 01:54:53.240758] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:10:48.048 [2024-12-07 01:54:53.240859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.048 [2024-12-07 01:54:53.240928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.048 [2024-12-07 01:54:53.240940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.048 01:54:53 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:48.048 /dev/nbd0 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:10:48.048 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:48.049 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:48.049 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:48.049 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.049 1+0 records in 00:10:48.049 1+0 records out 00:10:48.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344026 s, 11.9 MB/s 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:48.307 /dev/nbd1 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:48.307 1+0 records in 00:10:48.307 1+0 records out 00:10:48.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000254097 s, 16.1 MB/s 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:48.307 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.566 01:54:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85728 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@950 -- # '[' -z 85728 ']' 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 85728 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.825 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85728 00:10:49.083 killing process with pid 85728 00:10:49.083 Received shutdown signal, test time was about 60.000000 seconds 00:10:49.083 00:10:49.083 Latency(us) 00:10:49.083 [2024-12-07T01:54:54.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:49.083 [2024-12-07T01:54:54.545Z] =================================================================================================================== 00:10:49.083 [2024-12-07T01:54:54.545Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:49.083 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:49.083 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:49.083 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85728' 00:10:49.083 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 85728 00:10:49.083 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 85728 00:10:49.083 [2024-12-07 01:54:54.302544] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.083 [2024-12-07 01:54:54.333122] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.341 01:54:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:49.341 00:10:49.341 real 0m13.515s 00:10:49.341 user 0m15.564s 00:10:49.341 sys 0m2.712s 00:10:49.341 ************************************ 
00:10:49.342 END TEST raid_rebuild_test 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.342 ************************************ 00:10:49.342 01:54:54 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:49.342 01:54:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:10:49.342 01:54:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:49.342 01:54:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.342 ************************************ 00:10:49.342 START TEST raid_rebuild_test_sb 00:10:49.342 ************************************ 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.342 
01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:49.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86129 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86129 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 86129 ']' 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.342 01:54:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.342 01:54:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:49.342 [2024-12-07 01:54:54.728242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:10:49.342 [2024-12-07 01:54:54.728500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86129 ] 00:10:49.342 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:49.342 Zero copy mechanism will not be used. 
00:10:49.601 [2024-12-07 01:54:54.865292] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:49.601 [2024-12-07 01:54:54.911263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:49.601 [2024-12-07 01:54:54.953530] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:49.601 [2024-12-07 01:54:54.953658] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:50.166 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:50.166 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0
00:10:50.166 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:10:50.166 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:10:50.166 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.167 BaseBdev1_malloc
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.167 [2024-12-07 01:54:55.567593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:10:50.167 [2024-12-07 01:54:55.567722] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:50.167 [2024-12-07 01:54:55.567791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680
00:10:50.167 [2024-12-07 01:54:55.567842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:50.167 [2024-12-07 01:54:55.570038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:50.167 [2024-12-07 01:54:55.570104] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:10:50.167 BaseBdev1
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.167 BaseBdev2_malloc
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.167 [2024-12-07 01:54:55.603607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:10:50.167 [2024-12-07 01:54:55.603707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:50.167 [2024-12-07 01:54:55.603748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:10:50.167 [2024-12-07 01:54:55.603787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:50.167 [2024-12-07 01:54:55.605890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:50.167 [2024-12-07 01:54:55.605959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:10:50.167 BaseBdev2
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.167 spare_malloc
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.167 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.425 spare_delay
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.425 [2024-12-07 01:54:55.640212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:10:50.425 [2024-12-07 01:54:55.640265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:50.425 [2024-12-07 01:54:55.640287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:10:50.425 [2024-12-07 01:54:55.640295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:50.425 [2024-12-07 01:54:55.642415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:50.425 [2024-12-07 01:54:55.642449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:10:50.425 spare
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.425 [2024-12-07 01:54:55.648264] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:50.425 [2024-12-07 01:54:55.650065] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:50.425 [2024-12-07 01:54:55.650217] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200
00:10:50.425 [2024-12-07 01:54:55.650229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:50.425 [2024-12-07 01:54:55.650486] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:10:50.425 [2024-12-07 01:54:55.650612] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200
00:10:50.425 [2024-12-07 01:54:55.650625] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200
00:10:50.425 [2024-12-07 01:54:55.650803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:50.425 "name": "raid_bdev1",
00:10:50.425 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:50.425 "strip_size_kb": 0,
00:10:50.425 "state": "online",
00:10:50.425 "raid_level": "raid1",
00:10:50.425 "superblock": true,
00:10:50.425 "num_base_bdevs": 2,
00:10:50.425 "num_base_bdevs_discovered": 2,
00:10:50.425 "num_base_bdevs_operational": 2,
00:10:50.425 "base_bdevs_list": [
00:10:50.425 {
00:10:50.425 "name": "BaseBdev1",
00:10:50.425 "uuid": "536183cd-f57e-5a1f-aa59-477908db400a",
00:10:50.425 "is_configured": true,
00:10:50.425 "data_offset": 2048,
00:10:50.425 "data_size": 63488
00:10:50.425 },
00:10:50.425 {
00:10:50.425 "name": "BaseBdev2",
00:10:50.425 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:50.425 "is_configured": true,
00:10:50.425 "data_offset": 2048,
00:10:50.425 "data_size": 63488
00:10:50.425 }
00:10:50.425 ]
00:10:50.425 }'
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:50.425 01:54:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.684 [2024-12-07 01:54:56.087859] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:50.684 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:10:50.943 [2024-12-07 01:54:56.359171] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600
00:10:50.943 /dev/nbd0
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:10:50.943 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:51.202 1+0 records in
00:10:51.202 1+0 records out
00:10:51.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400896 s, 10.2 MB/s
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:10:51.202 01:54:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:10:55.397 63488+0 records in
00:10:55.397 63488+0 records out
00:10:55.397 32505856 bytes (33 MB, 31 MiB) copied, 4.37552 s, 7.4 MB/s
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:55.397 01:55:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:55.672 [2024-12-07 01:55:01.019415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.672 [2024-12-07 01:55:01.031492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:55.672 "name": "raid_bdev1",
00:10:55.672 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:55.672 "strip_size_kb": 0,
00:10:55.672 "state": "online",
00:10:55.672 "raid_level": "raid1",
00:10:55.672 "superblock": true,
00:10:55.672 "num_base_bdevs": 2,
00:10:55.672 "num_base_bdevs_discovered": 1,
00:10:55.672 "num_base_bdevs_operational": 1,
00:10:55.672 "base_bdevs_list": [
00:10:55.672 {
00:10:55.672 "name": null,
00:10:55.672 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:55.672 "is_configured": false,
00:10:55.672 "data_offset": 0,
00:10:55.672 "data_size": 63488
00:10:55.672 },
00:10:55.672 {
00:10:55.672 "name": "BaseBdev2",
00:10:55.672 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:55.672 "is_configured": true,
00:10:55.672 "data_offset": 2048,
00:10:55.672 "data_size": 63488
00:10:55.672 }
00:10:55.672 ]
00:10:55.672 }'
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:55.672 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.254 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:10:56.254 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.254 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:56.254 [2024-12-07 01:55:01.510737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:56.254 [2024-12-07 01:55:01.514919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280
00:10:56.254 01:55:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:56.254 01:55:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:10:56.254 [2024-12-07 01:55:01.516849] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:57.191 "name": "raid_bdev1",
00:10:57.191 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:57.191 "strip_size_kb": 0,
00:10:57.191 "state": "online",
00:10:57.191 "raid_level": "raid1",
00:10:57.191 "superblock": true,
00:10:57.191 "num_base_bdevs": 2,
00:10:57.191 "num_base_bdevs_discovered": 2,
00:10:57.191 "num_base_bdevs_operational": 2,
00:10:57.191 "process": {
00:10:57.191 "type": "rebuild",
00:10:57.191 "target": "spare",
00:10:57.191 "progress": {
00:10:57.191 "blocks": 20480,
00:10:57.191 "percent": 32
00:10:57.191 }
00:10:57.191 },
00:10:57.191 "base_bdevs_list": [
00:10:57.191 {
00:10:57.191 "name": "spare",
00:10:57.191 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9",
00:10:57.191 "is_configured": true,
00:10:57.191 "data_offset": 2048,
00:10:57.191 "data_size": 63488
00:10:57.191 },
00:10:57.191 {
00:10:57.191 "name": "BaseBdev2",
00:10:57.191 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:57.191 "is_configured": true,
00:10:57.191 "data_offset": 2048,
00:10:57.191 "data_size": 63488
00:10:57.191 }
00:10:57.191 ]
00:10:57.191 }'
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:10:57.191 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:57.450 [2024-12-07 01:55:02.665811] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:57.450 [2024-12-07 01:55:02.722237] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:10:57.450 [2024-12-07 01:55:02.722298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:57.450 [2024-12-07 01:55:02.722318] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:10:57.450 [2024-12-07 01:55:02.722326] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:57.450 "name": "raid_bdev1",
00:10:57.450 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:57.450 "strip_size_kb": 0,
00:10:57.450 "state": "online",
00:10:57.450 "raid_level": "raid1",
00:10:57.450 "superblock": true,
00:10:57.450 "num_base_bdevs": 2,
00:10:57.450 "num_base_bdevs_discovered": 1,
00:10:57.450 "num_base_bdevs_operational": 1,
00:10:57.450 "base_bdevs_list": [
00:10:57.450 {
00:10:57.450 "name": null,
00:10:57.450 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:57.450 "is_configured": false,
00:10:57.450 "data_offset": 0,
00:10:57.450 "data_size": 63488
00:10:57.450 },
00:10:57.450 {
00:10:57.450 "name": "BaseBdev2",
00:10:57.450 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:57.450 "is_configured": true,
00:10:57.450 "data_offset": 2048,
00:10:57.450 "data_size": 63488
00:10:57.450 }
00:10:57.450 ]
00:10:57.450 }'
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:57.450 01:55:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:58.018 "name": "raid_bdev1",
00:10:58.018 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:58.018 "strip_size_kb": 0,
00:10:58.018 "state": "online",
00:10:58.018 "raid_level": "raid1",
00:10:58.018 "superblock": true,
00:10:58.018 "num_base_bdevs": 2,
00:10:58.018 "num_base_bdevs_discovered": 1,
00:10:58.018 "num_base_bdevs_operational": 1,
00:10:58.018 "base_bdevs_list": [
00:10:58.018 {
00:10:58.018 "name": null,
00:10:58.018 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:58.018 "is_configured": false,
00:10:58.018 "data_offset": 0,
00:10:58.018 "data_size": 63488
00:10:58.018 },
00:10:58.018 {
00:10:58.018 "name": "BaseBdev2",
00:10:58.018 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:58.018 "is_configured": true,
00:10:58.018 "data_offset": 2048,
00:10:58.018 "data_size": 63488
00:10:58.018 }
00:10:58.018 ]
00:10:58.018 }'
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:58.018 [2024-12-07 01:55:03.329933] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:10:58.018 [2024-12-07 01:55:03.334068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.018 01:55:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:10:58.018 [2024-12-07 01:55:03.336017] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:58.971 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:58.971 "name": "raid_bdev1",
00:10:58.971 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:58.971 "strip_size_kb": 0,
00:10:58.971 "state": "online",
00:10:58.971 "raid_level": "raid1",
00:10:58.971 "superblock": true,
00:10:58.971 "num_base_bdevs": 2,
00:10:58.971 "num_base_bdevs_discovered": 2,
00:10:58.971 "num_base_bdevs_operational": 2,
00:10:58.971 "process": {
00:10:58.971 "type": "rebuild",
00:10:58.971 "target": "spare",
00:10:58.971 "progress": {
00:10:58.971 "blocks": 20480,
00:10:58.971 "percent": 32
00:10:58.971 }
00:10:58.971 },
00:10:58.971 "base_bdevs_list": [
00:10:58.971 {
00:10:58.971 "name": "spare",
00:10:58.971 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9",
00:10:58.971 "is_configured": true,
00:10:58.971 "data_offset": 2048,
00:10:58.971 "data_size": 63488
00:10:58.971 },
00:10:58.971 {
00:10:58.971 "name": "BaseBdev2",
00:10:58.971 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef",
00:10:58.971 "is_configured": true,
00:10:58.971 "data_offset": 2048,
00:10:58.971 "data_size": 63488
00:10:58.971 }
00:10:58.971 ]
00:10:58.971 }'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:10:59.230 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=302
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:10:59.230 "name": "raid_bdev1",
00:10:59.230 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507",
00:10:59.230 "strip_size_kb": 0,
00:10:59.230 "state": "online",
00:10:59.230 "raid_level": "raid1",
00:10:59.230 "superblock": true,
00:10:59.230 "num_base_bdevs": 2,
00:10:59.230 "num_base_bdevs_discovered": 2,
00:10:59.230 "num_base_bdevs_operational": 2,
00:10:59.230 "process": {
00:10:59.230 "type": "rebuild",
00:10:59.230 "target": "spare",
00:10:59.230 "progress": {
00:10:59.230 "blocks": 22528,
00:10:59.230 "percent": 35
00:10:59.230 }
00:10:59.230 },
00:10:59.230 "base_bdevs_list": [
00:10:59.230 { 00:10:59.230 "name": "spare", 00:10:59.230 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:10:59.230 "is_configured": true, 00:10:59.230 "data_offset": 2048, 00:10:59.230 "data_size": 63488 00:10:59.230 }, 00:10:59.230 { 00:10:59.230 "name": "BaseBdev2", 00:10:59.230 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:10:59.230 "is_configured": true, 00:10:59.230 "data_offset": 2048, 00:10:59.230 "data_size": 63488 00:10:59.230 } 00:10:59.230 ] 00:10:59.230 }' 00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:59.230 01:55:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.164 "name": "raid_bdev1", 00:11:00.164 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:00.164 "strip_size_kb": 0, 00:11:00.164 "state": "online", 00:11:00.164 "raid_level": "raid1", 00:11:00.164 "superblock": true, 00:11:00.164 "num_base_bdevs": 2, 00:11:00.164 "num_base_bdevs_discovered": 2, 00:11:00.164 "num_base_bdevs_operational": 2, 00:11:00.164 "process": { 00:11:00.164 "type": "rebuild", 00:11:00.164 "target": "spare", 00:11:00.164 "progress": { 00:11:00.164 "blocks": 45056, 00:11:00.164 "percent": 70 00:11:00.164 } 00:11:00.164 }, 00:11:00.164 "base_bdevs_list": [ 00:11:00.164 { 00:11:00.164 "name": "spare", 00:11:00.164 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:00.164 "is_configured": true, 00:11:00.164 "data_offset": 2048, 00:11:00.164 "data_size": 63488 00:11:00.164 }, 00:11:00.164 { 00:11:00.164 "name": "BaseBdev2", 00:11:00.164 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:00.164 "is_configured": true, 00:11:00.164 "data_offset": 2048, 00:11:00.164 "data_size": 63488 00:11:00.164 } 00:11:00.164 ] 00:11:00.164 }' 00:11:00.164 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.422 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:00.422 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.422 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:00.422 01:55:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:00.988 [2024-12-07 
01:55:06.448528] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:00.988 [2024-12-07 01:55:06.448614] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:00.988 [2024-12-07 01:55:06.448811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.246 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.505 "name": "raid_bdev1", 00:11:01.505 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:01.505 "strip_size_kb": 0, 00:11:01.505 "state": "online", 00:11:01.505 "raid_level": "raid1", 00:11:01.505 "superblock": true, 00:11:01.505 "num_base_bdevs": 2, 00:11:01.505 "num_base_bdevs_discovered": 2, 00:11:01.505 
"num_base_bdevs_operational": 2, 00:11:01.505 "base_bdevs_list": [ 00:11:01.505 { 00:11:01.505 "name": "spare", 00:11:01.505 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:01.505 "is_configured": true, 00:11:01.505 "data_offset": 2048, 00:11:01.505 "data_size": 63488 00:11:01.505 }, 00:11:01.505 { 00:11:01.505 "name": "BaseBdev2", 00:11:01.505 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:01.505 "is_configured": true, 00:11:01.505 "data_offset": 2048, 00:11:01.505 "data_size": 63488 00:11:01.505 } 00:11:01.505 ] 00:11:01.505 }' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:01.505 "name": "raid_bdev1", 00:11:01.505 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:01.505 "strip_size_kb": 0, 00:11:01.505 "state": "online", 00:11:01.505 "raid_level": "raid1", 00:11:01.505 "superblock": true, 00:11:01.505 "num_base_bdevs": 2, 00:11:01.505 "num_base_bdevs_discovered": 2, 00:11:01.505 "num_base_bdevs_operational": 2, 00:11:01.505 "base_bdevs_list": [ 00:11:01.505 { 00:11:01.505 "name": "spare", 00:11:01.505 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:01.505 "is_configured": true, 00:11:01.505 "data_offset": 2048, 00:11:01.505 "data_size": 63488 00:11:01.505 }, 00:11:01.505 { 00:11:01.505 "name": "BaseBdev2", 00:11:01.505 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:01.505 "is_configured": true, 00:11:01.505 "data_offset": 2048, 00:11:01.505 "data_size": 63488 00:11:01.505 } 00:11:01.505 ] 00:11:01.505 }' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.505 01:55:06 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.505 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.764 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.764 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.764 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.764 01:55:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.764 01:55:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.764 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.764 "name": "raid_bdev1", 00:11:01.764 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:01.764 "strip_size_kb": 0, 00:11:01.764 "state": "online", 00:11:01.764 "raid_level": "raid1", 00:11:01.764 "superblock": true, 00:11:01.764 "num_base_bdevs": 2, 00:11:01.764 "num_base_bdevs_discovered": 2, 00:11:01.764 "num_base_bdevs_operational": 2, 00:11:01.764 "base_bdevs_list": [ 00:11:01.764 { 00:11:01.764 "name": "spare", 00:11:01.764 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:01.764 "is_configured": true, 00:11:01.764 "data_offset": 2048, 00:11:01.764 "data_size": 63488 00:11:01.764 }, 00:11:01.764 { 
00:11:01.764 "name": "BaseBdev2", 00:11:01.764 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:01.764 "is_configured": true, 00:11:01.764 "data_offset": 2048, 00:11:01.764 "data_size": 63488 00:11:01.764 } 00:11:01.764 ] 00:11:01.764 }' 00:11:01.764 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.764 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.023 [2024-12-07 01:55:07.391793] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.023 [2024-12-07 01:55:07.391827] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.023 [2024-12-07 01:55:07.391916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.023 [2024-12-07 01:55:07.391989] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.023 [2024-12-07 01:55:07.392002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.023 01:55:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.023 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:02.282 /dev/nbd0 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.282 1+0 records in 00:11:02.282 1+0 records out 00:11:02.282 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223903 s, 18.3 MB/s 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.282 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:02.542 /dev/nbd1 00:11:02.542 01:55:07 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:02.542 1+0 records in 00:11:02.542 1+0 records out 00:11:02.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404495 s, 10.1 MB/s 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:02.542 01:55:07 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:02.542 01:55:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:02.801 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:02.802 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:02.802 01:55:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:02.802 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.061 [2024-12-07 01:55:08.507934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:03.061 [2024-12-07 01:55:08.507994] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:03.061 [2024-12-07 01:55:08.508014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:03.061 [2024-12-07 01:55:08.508027] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:03.061 [2024-12-07 01:55:08.510268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:03.061 [2024-12-07 01:55:08.510307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:03.061 [2024-12-07 01:55:08.510390] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:03.061 [2024-12-07 01:55:08.510436] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:03.061 [2024-12-07 01:55:08.510554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:03.061 spare 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.061 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.320 [2024-12-07 01:55:08.610463] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:03.320 [2024-12-07 01:55:08.610500] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:03.320 [2024-12-07 01:55:08.610838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:11:03.320 [2024-12-07 01:55:08.611038] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:03.320 [2024-12-07 01:55:08.611065] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:03.320 [2024-12-07 01:55:08.611226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.320 
01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.320 "name": "raid_bdev1", 00:11:03.320 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:03.320 "strip_size_kb": 0, 00:11:03.320 "state": "online", 00:11:03.320 "raid_level": "raid1", 00:11:03.320 "superblock": true, 00:11:03.320 "num_base_bdevs": 2, 00:11:03.320 "num_base_bdevs_discovered": 2, 00:11:03.320 "num_base_bdevs_operational": 2, 00:11:03.320 "base_bdevs_list": [ 00:11:03.320 { 00:11:03.320 "name": "spare", 00:11:03.320 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:03.320 "is_configured": true, 00:11:03.320 "data_offset": 2048, 00:11:03.320 "data_size": 63488 00:11:03.320 }, 00:11:03.320 { 00:11:03.320 "name": "BaseBdev2", 00:11:03.320 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:03.320 "is_configured": true, 00:11:03.320 "data_offset": 2048, 00:11:03.320 "data_size": 63488 00:11:03.320 } 00:11:03.320 ] 00:11:03.320 }' 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.320 01:55:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.889 01:55:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.889 "name": "raid_bdev1", 00:11:03.889 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:03.889 "strip_size_kb": 0, 00:11:03.889 "state": "online", 00:11:03.889 "raid_level": "raid1", 00:11:03.889 "superblock": true, 00:11:03.889 "num_base_bdevs": 2, 00:11:03.889 "num_base_bdevs_discovered": 2, 00:11:03.889 "num_base_bdevs_operational": 2, 00:11:03.889 "base_bdevs_list": [ 00:11:03.889 { 00:11:03.889 "name": "spare", 00:11:03.889 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:03.889 "is_configured": true, 00:11:03.889 "data_offset": 2048, 00:11:03.889 "data_size": 63488 00:11:03.889 }, 00:11:03.889 { 00:11:03.889 "name": "BaseBdev2", 00:11:03.889 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:03.889 "is_configured": true, 00:11:03.889 "data_offset": 2048, 00:11:03.889 "data_size": 63488 00:11:03.889 } 00:11:03.889 ] 00:11:03.889 }' 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.889 [2024-12-07 01:55:09.298745] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.889 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.149 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.149 "name": "raid_bdev1", 00:11:04.149 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:04.149 "strip_size_kb": 0, 00:11:04.149 "state": "online", 00:11:04.149 "raid_level": "raid1", 00:11:04.149 "superblock": true, 00:11:04.149 "num_base_bdevs": 2, 00:11:04.149 "num_base_bdevs_discovered": 1, 00:11:04.149 "num_base_bdevs_operational": 1, 00:11:04.149 "base_bdevs_list": [ 00:11:04.149 { 00:11:04.149 "name": null, 00:11:04.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.149 "is_configured": false, 00:11:04.149 "data_offset": 0, 00:11:04.149 "data_size": 63488 00:11:04.149 }, 00:11:04.149 { 00:11:04.149 "name": "BaseBdev2", 00:11:04.149 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:04.149 "is_configured": true, 00:11:04.149 "data_offset": 2048, 00:11:04.149 "data_size": 63488 00:11:04.149 } 00:11:04.149 ] 00:11:04.149 }' 00:11:04.149 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.149 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:04.408 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.408 01:55:09 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.408 [2024-12-07 01:55:09.785903] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.408 [2024-12-07 01:55:09.786115] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:04.408 [2024-12-07 01:55:09.786131] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:04.408 [2024-12-07 01:55:09.786173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.408 [2024-12-07 01:55:09.790251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:11:04.408 01:55:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.408 01:55:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:04.408 [2024-12-07 01:55:09.792295] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:05.340 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.340 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.340 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.340 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.340 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.598 "name": "raid_bdev1", 00:11:05.598 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:05.598 "strip_size_kb": 0, 00:11:05.598 "state": "online", 00:11:05.598 "raid_level": "raid1", 00:11:05.598 "superblock": true, 00:11:05.598 "num_base_bdevs": 2, 00:11:05.598 "num_base_bdevs_discovered": 2, 00:11:05.598 "num_base_bdevs_operational": 2, 00:11:05.598 "process": { 00:11:05.598 "type": "rebuild", 00:11:05.598 "target": "spare", 00:11:05.598 "progress": { 00:11:05.598 "blocks": 20480, 00:11:05.598 "percent": 32 00:11:05.598 } 00:11:05.598 }, 00:11:05.598 "base_bdevs_list": [ 00:11:05.598 { 00:11:05.598 "name": "spare", 00:11:05.598 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:05.598 "is_configured": true, 00:11:05.598 "data_offset": 2048, 00:11:05.598 "data_size": 63488 00:11:05.598 }, 00:11:05.598 { 00:11:05.598 "name": "BaseBdev2", 00:11:05.598 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:05.598 "is_configured": true, 00:11:05.598 "data_offset": 2048, 00:11:05.598 "data_size": 63488 00:11:05.598 } 00:11:05.598 ] 00:11:05.598 }' 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:05.598 01:55:10 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.598 01:55:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.598 [2024-12-07 01:55:10.933134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.598 [2024-12-07 01:55:10.996929] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:05.598 [2024-12-07 01:55:10.996977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.598 [2024-12-07 01:55:10.996992] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:05.598 [2024-12-07 01:55:10.996999] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.598 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.599 "name": "raid_bdev1", 00:11:05.599 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:05.599 "strip_size_kb": 0, 00:11:05.599 "state": "online", 00:11:05.599 "raid_level": "raid1", 00:11:05.599 "superblock": true, 00:11:05.599 "num_base_bdevs": 2, 00:11:05.599 "num_base_bdevs_discovered": 1, 00:11:05.599 "num_base_bdevs_operational": 1, 00:11:05.599 "base_bdevs_list": [ 00:11:05.599 { 00:11:05.599 "name": null, 00:11:05.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.599 "is_configured": false, 00:11:05.599 "data_offset": 0, 00:11:05.599 "data_size": 63488 00:11:05.599 }, 00:11:05.599 { 00:11:05.599 "name": "BaseBdev2", 00:11:05.599 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:05.599 "is_configured": true, 00:11:05.599 "data_offset": 2048, 00:11:05.599 "data_size": 63488 00:11:05.599 } 00:11:05.599 ] 00:11:05.599 }' 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.599 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.163 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:06.163 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:06.163 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.163 [2024-12-07 01:55:11.472419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:06.163 [2024-12-07 01:55:11.472479] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.163 [2024-12-07 01:55:11.472508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:06.163 [2024-12-07 01:55:11.472518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.163 [2024-12-07 01:55:11.472946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.163 [2024-12-07 01:55:11.472975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:06.163 [2024-12-07 01:55:11.473062] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:06.163 [2024-12-07 01:55:11.473078] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:06.163 [2024-12-07 01:55:11.473109] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:06.163 [2024-12-07 01:55:11.473140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:06.163 spare 00:11:06.163 [2024-12-07 01:55:11.476917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:06.163 01:55:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.163 01:55:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:06.163 [2024-12-07 01:55:11.478744] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.101 "name": "raid_bdev1", 00:11:07.101 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:07.101 "strip_size_kb": 0, 00:11:07.101 "state": "online", 00:11:07.101 
"raid_level": "raid1", 00:11:07.101 "superblock": true, 00:11:07.101 "num_base_bdevs": 2, 00:11:07.101 "num_base_bdevs_discovered": 2, 00:11:07.101 "num_base_bdevs_operational": 2, 00:11:07.101 "process": { 00:11:07.101 "type": "rebuild", 00:11:07.101 "target": "spare", 00:11:07.101 "progress": { 00:11:07.101 "blocks": 20480, 00:11:07.101 "percent": 32 00:11:07.101 } 00:11:07.101 }, 00:11:07.101 "base_bdevs_list": [ 00:11:07.101 { 00:11:07.101 "name": "spare", 00:11:07.101 "uuid": "abfb2d33-ad12-546f-9723-babdfa4568b9", 00:11:07.101 "is_configured": true, 00:11:07.101 "data_offset": 2048, 00:11:07.101 "data_size": 63488 00:11:07.101 }, 00:11:07.101 { 00:11:07.101 "name": "BaseBdev2", 00:11:07.101 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:07.101 "is_configured": true, 00:11:07.101 "data_offset": 2048, 00:11:07.101 "data_size": 63488 00:11:07.101 } 00:11:07.101 ] 00:11:07.101 }' 00:11:07.101 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.358 [2024-12-07 01:55:12.639617] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.358 [2024-12-07 01:55:12.682663] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:07.358 [2024-12-07 01:55:12.682732] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:07.358 [2024-12-07 01:55:12.682747] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:07.358 [2024-12-07 01:55:12.682756] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.358 01:55:12 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.358 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.358 "name": "raid_bdev1", 00:11:07.358 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:07.359 "strip_size_kb": 0, 00:11:07.359 "state": "online", 00:11:07.359 "raid_level": "raid1", 00:11:07.359 "superblock": true, 00:11:07.359 "num_base_bdevs": 2, 00:11:07.359 "num_base_bdevs_discovered": 1, 00:11:07.359 "num_base_bdevs_operational": 1, 00:11:07.359 "base_bdevs_list": [ 00:11:07.359 { 00:11:07.359 "name": null, 00:11:07.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.359 "is_configured": false, 00:11:07.359 "data_offset": 0, 00:11:07.359 "data_size": 63488 00:11:07.359 }, 00:11:07.359 { 00:11:07.359 "name": "BaseBdev2", 00:11:07.359 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:07.359 "is_configured": true, 00:11:07.359 "data_offset": 2048, 00:11:07.359 "data_size": 63488 00:11:07.359 } 00:11:07.359 ] 00:11:07.359 }' 00:11:07.359 01:55:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.359 01:55:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.924 "name": "raid_bdev1", 00:11:07.924 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:07.924 "strip_size_kb": 0, 00:11:07.924 "state": "online", 00:11:07.924 "raid_level": "raid1", 00:11:07.924 "superblock": true, 00:11:07.924 "num_base_bdevs": 2, 00:11:07.924 "num_base_bdevs_discovered": 1, 00:11:07.924 "num_base_bdevs_operational": 1, 00:11:07.924 "base_bdevs_list": [ 00:11:07.924 { 00:11:07.924 "name": null, 00:11:07.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.924 "is_configured": false, 00:11:07.924 "data_offset": 0, 00:11:07.924 "data_size": 63488 00:11:07.924 }, 00:11:07.924 { 00:11:07.924 "name": "BaseBdev2", 00:11:07.924 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:07.924 "is_configured": true, 00:11:07.924 "data_offset": 2048, 00:11:07.924 "data_size": 63488 00:11:07.924 } 00:11:07.924 ] 00:11:07.924 }' 00:11:07.924 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.925 [2024-12-07 01:55:13.238116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:07.925 [2024-12-07 01:55:13.238169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.925 [2024-12-07 01:55:13.238188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:07.925 [2024-12-07 01:55:13.238199] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.925 [2024-12-07 01:55:13.238591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.925 [2024-12-07 01:55:13.238610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:07.925 [2024-12-07 01:55:13.238716] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:07.925 [2024-12-07 01:55:13.238735] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:07.925 [2024-12-07 01:55:13.238744] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:07.925 [2024-12-07 01:55:13.238755] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:07.925 BaseBdev1 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.925 01:55:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.858 "name": "raid_bdev1", 00:11:08.858 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:08.858 
"strip_size_kb": 0, 00:11:08.858 "state": "online", 00:11:08.858 "raid_level": "raid1", 00:11:08.858 "superblock": true, 00:11:08.858 "num_base_bdevs": 2, 00:11:08.858 "num_base_bdevs_discovered": 1, 00:11:08.858 "num_base_bdevs_operational": 1, 00:11:08.858 "base_bdevs_list": [ 00:11:08.858 { 00:11:08.858 "name": null, 00:11:08.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.858 "is_configured": false, 00:11:08.858 "data_offset": 0, 00:11:08.858 "data_size": 63488 00:11:08.858 }, 00:11:08.858 { 00:11:08.858 "name": "BaseBdev2", 00:11:08.858 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:08.858 "is_configured": true, 00:11:08.858 "data_offset": 2048, 00:11:08.858 "data_size": 63488 00:11:08.858 } 00:11:08.858 ] 00:11:08.858 }' 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.858 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.424 01:55:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.424 "name": "raid_bdev1", 00:11:09.424 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:09.424 "strip_size_kb": 0, 00:11:09.424 "state": "online", 00:11:09.424 "raid_level": "raid1", 00:11:09.424 "superblock": true, 00:11:09.424 "num_base_bdevs": 2, 00:11:09.424 "num_base_bdevs_discovered": 1, 00:11:09.424 "num_base_bdevs_operational": 1, 00:11:09.424 "base_bdevs_list": [ 00:11:09.424 { 00:11:09.424 "name": null, 00:11:09.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.424 "is_configured": false, 00:11:09.424 "data_offset": 0, 00:11:09.424 "data_size": 63488 00:11:09.424 }, 00:11:09.424 { 00:11:09.424 "name": "BaseBdev2", 00:11:09.424 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:09.424 "is_configured": true, 00:11:09.424 "data_offset": 2048, 00:11:09.424 "data_size": 63488 00:11:09.424 } 00:11:09.424 ] 00:11:09.424 }' 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.424 [2024-12-07 01:55:14.835470] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.424 [2024-12-07 01:55:14.835649] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:09.424 [2024-12-07 01:55:14.835677] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:09.424 request: 00:11:09.424 { 00:11:09.424 "base_bdev": "BaseBdev1", 00:11:09.424 "raid_bdev": "raid_bdev1", 00:11:09.424 "method": "bdev_raid_add_base_bdev", 00:11:09.424 "req_id": 1 00:11:09.424 } 00:11:09.424 Got JSON-RPC error response 00:11:09.424 response: 00:11:09.424 { 00:11:09.424 "code": -22, 00:11:09.424 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:09.424 } 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:09.424 01:55:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:09.424 01:55:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.798 "name": "raid_bdev1", 00:11:10.798 "uuid": 
"f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:10.798 "strip_size_kb": 0, 00:11:10.798 "state": "online", 00:11:10.798 "raid_level": "raid1", 00:11:10.798 "superblock": true, 00:11:10.798 "num_base_bdevs": 2, 00:11:10.798 "num_base_bdevs_discovered": 1, 00:11:10.798 "num_base_bdevs_operational": 1, 00:11:10.798 "base_bdevs_list": [ 00:11:10.798 { 00:11:10.798 "name": null, 00:11:10.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.798 "is_configured": false, 00:11:10.798 "data_offset": 0, 00:11:10.798 "data_size": 63488 00:11:10.798 }, 00:11:10.798 { 00:11:10.798 "name": "BaseBdev2", 00:11:10.798 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:10.798 "is_configured": true, 00:11:10.798 "data_offset": 2048, 00:11:10.798 "data_size": 63488 00:11:10.798 } 00:11:10.798 ] 00:11:10.798 }' 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.798 01:55:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.798 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:11.056 "name": "raid_bdev1", 00:11:11.056 "uuid": "f01df1a6-66ef-408c-a660-76cbb279f507", 00:11:11.056 "strip_size_kb": 0, 00:11:11.056 "state": "online", 00:11:11.056 "raid_level": "raid1", 00:11:11.056 "superblock": true, 00:11:11.056 "num_base_bdevs": 2, 00:11:11.056 "num_base_bdevs_discovered": 1, 00:11:11.056 "num_base_bdevs_operational": 1, 00:11:11.056 "base_bdevs_list": [ 00:11:11.056 { 00:11:11.056 "name": null, 00:11:11.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.056 "is_configured": false, 00:11:11.056 "data_offset": 0, 00:11:11.056 "data_size": 63488 00:11:11.056 }, 00:11:11.056 { 00:11:11.056 "name": "BaseBdev2", 00:11:11.056 "uuid": "e6059e1b-02ea-5be9-95f3-d6869d0639ef", 00:11:11.056 "is_configured": true, 00:11:11.056 "data_offset": 2048, 00:11:11.056 "data_size": 63488 00:11:11.056 } 00:11:11.056 ] 00:11:11.056 }' 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:11.056 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86129 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 86129 ']' 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 86129 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86129 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:11.057 killing process with pid 86129 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86129' 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 86129 00:11:11.057 Received shutdown signal, test time was about 60.000000 seconds 00:11:11.057 00:11:11.057 Latency(us) 00:11:11.057 [2024-12-07T01:55:16.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:11.057 [2024-12-07T01:55:16.519Z] =================================================================================================================== 00:11:11.057 [2024-12-07T01:55:16.519Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:11.057 [2024-12-07 01:55:16.418700] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.057 [2024-12-07 01:55:16.418826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.057 [2024-12-07 01:55:16.418888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.057 [2024-12-07 01:55:16.418899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:11.057 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 86129 00:11:11.057 [2024-12-07 01:55:16.449364] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:11:11.316 00:11:11.316 real 0m22.058s 00:11:11.316 user 0m26.868s 00:11:11.316 sys 0m3.934s 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.316 ************************************ 00:11:11.316 END TEST raid_rebuild_test_sb 00:11:11.316 ************************************ 00:11:11.316 01:55:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:11.316 01:55:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:11.316 01:55:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:11.316 01:55:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.316 ************************************ 00:11:11.316 START TEST raid_rebuild_test_io 00:11:11.316 ************************************ 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:11.316 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86851 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86851 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 
86851 ']' 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.317 01:55:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.575 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:11.575 Zero copy mechanism will not be used. 00:11:11.575 [2024-12-07 01:55:16.841467] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:11.575 [2024-12-07 01:55:16.841580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86851 ] 00:11:11.575 [2024-12-07 01:55:16.984977] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.575 [2024-12-07 01:55:17.029861] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.833 [2024-12-07 01:55:17.071945] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.833 [2024-12-07 01:55:17.071984] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 BaseBdev1_malloc 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 [2024-12-07 01:55:17.681314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:12.400 [2024-12-07 01:55:17.681365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.400 [2024-12-07 01:55:17.681388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:12.400 [2024-12-07 01:55:17.681409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.400 [2024-12-07 01:55:17.683481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.400 [2024-12-07 01:55:17.683515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:12.400 BaseBdev1 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 BaseBdev2_malloc 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 [2024-12-07 01:55:17.717382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:12.400 [2024-12-07 01:55:17.717433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.400 [2024-12-07 01:55:17.717457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:12.400 [2024-12-07 01:55:17.717468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.400 [2024-12-07 01:55:17.719556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.400 [2024-12-07 01:55:17.719588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:12.400 BaseBdev2 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 spare_malloc 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 spare_delay 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 [2024-12-07 01:55:17.757797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:12.400 [2024-12-07 01:55:17.757848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.400 [2024-12-07 01:55:17.757869] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:12.400 [2024-12-07 01:55:17.757878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.400 [2024-12-07 01:55:17.759981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.400 [2024-12-07 01:55:17.760013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:12.400 spare 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.400 
01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.400 [2024-12-07 01:55:17.769677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.400 [2024-12-07 01:55:17.771497] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.400 [2024-12-07 01:55:17.771594] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:12.400 [2024-12-07 01:55:17.771608] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:12.400 [2024-12-07 01:55:17.771884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:12.400 [2024-12-07 01:55:17.772048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:12.400 [2024-12-07 01:55:17.772068] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:12.400 [2024-12-07 01:55:17.772191] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.400 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.401 "name": "raid_bdev1", 00:11:12.401 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:12.401 "strip_size_kb": 0, 00:11:12.401 "state": "online", 00:11:12.401 "raid_level": "raid1", 00:11:12.401 "superblock": false, 00:11:12.401 "num_base_bdevs": 2, 00:11:12.401 "num_base_bdevs_discovered": 2, 00:11:12.401 "num_base_bdevs_operational": 2, 00:11:12.401 "base_bdevs_list": [ 00:11:12.401 { 00:11:12.401 "name": "BaseBdev1", 00:11:12.401 "uuid": "df97a5e8-ddae-5a2b-86a0-a2de043500ac", 00:11:12.401 "is_configured": true, 00:11:12.401 "data_offset": 0, 00:11:12.401 "data_size": 65536 00:11:12.401 }, 00:11:12.401 { 00:11:12.401 "name": "BaseBdev2", 00:11:12.401 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:12.401 "is_configured": true, 00:11:12.401 "data_offset": 0, 00:11:12.401 "data_size": 65536 00:11:12.401 } 00:11:12.401 ] 00:11:12.401 }' 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.401 01:55:17 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.968 [2024-12-07 01:55:18.221129] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:11:12.968 [2024-12-07 01:55:18.296759] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:12.968 "name": "raid_bdev1", 00:11:12.968 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:12.968 "strip_size_kb": 0, 00:11:12.968 "state": "online", 00:11:12.968 "raid_level": "raid1", 00:11:12.968 "superblock": false, 00:11:12.968 "num_base_bdevs": 2, 00:11:12.968 "num_base_bdevs_discovered": 1, 00:11:12.968 "num_base_bdevs_operational": 1, 00:11:12.968 "base_bdevs_list": [ 00:11:12.968 { 00:11:12.968 "name": null, 00:11:12.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.968 "is_configured": false, 00:11:12.968 "data_offset": 0, 00:11:12.968 "data_size": 65536 00:11:12.968 }, 00:11:12.968 { 00:11:12.968 "name": "BaseBdev2", 00:11:12.968 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:12.968 "is_configured": true, 00:11:12.968 "data_offset": 0, 00:11:12.968 "data_size": 65536 00:11:12.968 } 00:11:12.968 ] 00:11:12.968 }' 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.968 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.969 [2024-12-07 01:55:18.386555] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:12.969 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:12.969 Zero copy mechanism will not be used. 00:11:12.969 Running I/O for 60 seconds... 
00:11:13.536 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:13.536 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.536 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.536 [2024-12-07 01:55:18.780755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:13.536 01:55:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.536 01:55:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:13.536 [2024-12-07 01:55:18.827982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:13.536 [2024-12-07 01:55:18.829872] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:13.536 [2024-12-07 01:55:18.942683] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.536 [2024-12-07 01:55:18.943069] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.796 [2024-12-07 01:55:19.056080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.796 [2024-12-07 01:55:19.056359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:14.056 [2024-12-07 01:55:19.284892] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:14.056 173.00 IOPS, 519.00 MiB/s [2024-12-07T01:55:19.518Z] [2024-12-07 01:55:19.499933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:14.056 [2024-12-07 01:55:19.500208] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.624 "name": "raid_bdev1", 00:11:14.624 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:14.624 "strip_size_kb": 0, 00:11:14.624 "state": "online", 00:11:14.624 "raid_level": "raid1", 00:11:14.624 "superblock": false, 00:11:14.624 "num_base_bdevs": 2, 00:11:14.624 "num_base_bdevs_discovered": 2, 00:11:14.624 "num_base_bdevs_operational": 2, 00:11:14.624 "process": { 00:11:14.624 "type": "rebuild", 00:11:14.624 "target": "spare", 00:11:14.624 "progress": { 00:11:14.624 "blocks": 12288, 00:11:14.624 "percent": 18 00:11:14.624 } 00:11:14.624 }, 00:11:14.624 "base_bdevs_list": [ 00:11:14.624 { 00:11:14.624 "name": "spare", 00:11:14.624 "uuid": 
"319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:14.624 "is_configured": true, 00:11:14.624 "data_offset": 0, 00:11:14.624 "data_size": 65536 00:11:14.624 }, 00:11:14.624 { 00:11:14.624 "name": "BaseBdev2", 00:11:14.624 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:14.624 "is_configured": true, 00:11:14.624 "data_offset": 0, 00:11:14.624 "data_size": 65536 00:11:14.624 } 00:11:14.624 ] 00:11:14.624 }' 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.624 01:55:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 [2024-12-07 01:55:19.966381] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.624 [2024-12-07 01:55:19.972906] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:14.624 [2024-12-07 01:55:19.974235] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:14.624 [2024-12-07 01:55:19.981430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.624 [2024-12-07 01:55:19.981479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.624 [2024-12-07 01:55:19.981490] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:14.624 
[2024-12-07 01:55:19.997946] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.624 "name": 
"raid_bdev1", 00:11:14.624 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:14.624 "strip_size_kb": 0, 00:11:14.624 "state": "online", 00:11:14.624 "raid_level": "raid1", 00:11:14.624 "superblock": false, 00:11:14.624 "num_base_bdevs": 2, 00:11:14.624 "num_base_bdevs_discovered": 1, 00:11:14.624 "num_base_bdevs_operational": 1, 00:11:14.624 "base_bdevs_list": [ 00:11:14.624 { 00:11:14.624 "name": null, 00:11:14.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.624 "is_configured": false, 00:11:14.624 "data_offset": 0, 00:11:14.624 "data_size": 65536 00:11:14.624 }, 00:11:14.624 { 00:11:14.624 "name": "BaseBdev2", 00:11:14.624 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:14.624 "is_configured": true, 00:11:14.624 "data_offset": 0, 00:11:14.624 "data_size": 65536 00:11:14.624 } 00:11:14.624 ] 00:11:14.624 }' 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.624 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.191 188.50 IOPS, 565.50 MiB/s [2024-12-07T01:55:20.653Z] 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:15.191 "name": "raid_bdev1", 00:11:15.191 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:15.191 "strip_size_kb": 0, 00:11:15.191 "state": "online", 00:11:15.191 "raid_level": "raid1", 00:11:15.191 "superblock": false, 00:11:15.191 "num_base_bdevs": 2, 00:11:15.191 "num_base_bdevs_discovered": 1, 00:11:15.191 "num_base_bdevs_operational": 1, 00:11:15.191 "base_bdevs_list": [ 00:11:15.191 { 00:11:15.191 "name": null, 00:11:15.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.191 "is_configured": false, 00:11:15.191 "data_offset": 0, 00:11:15.191 "data_size": 65536 00:11:15.191 }, 00:11:15.191 { 00:11:15.191 "name": "BaseBdev2", 00:11:15.191 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:15.191 "is_configured": true, 00:11:15.191 "data_offset": 0, 00:11:15.191 "data_size": 65536 00:11:15.191 } 00:11:15.191 ] 00:11:15.191 }' 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:15.191 [2024-12-07 01:55:20.625343] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.191 01:55:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:15.449 [2024-12-07 01:55:20.663838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:15.449 [2024-12-07 01:55:20.665737] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.449 [2024-12-07 01:55:20.790092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.449 [2024-12-07 01:55:20.790597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.709 [2024-12-07 01:55:21.002045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.709 [2024-12-07 01:55:21.002323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.968 [2024-12-07 01:55:21.341834] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:16.228 174.00 IOPS, 522.00 MiB/s [2024-12-07T01:55:21.690Z] [2024-12-07 01:55:21.444527] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.228 [2024-12-07 01:55:21.444812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.228 01:55:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.228 01:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.487 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.487 "name": "raid_bdev1", 00:11:16.487 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:16.487 "strip_size_kb": 0, 00:11:16.487 "state": "online", 00:11:16.487 "raid_level": "raid1", 00:11:16.487 "superblock": false, 00:11:16.487 "num_base_bdevs": 2, 00:11:16.487 "num_base_bdevs_discovered": 2, 00:11:16.487 "num_base_bdevs_operational": 2, 00:11:16.487 "process": { 00:11:16.487 "type": "rebuild", 00:11:16.487 "target": "spare", 00:11:16.487 "progress": { 00:11:16.488 "blocks": 12288, 00:11:16.488 "percent": 18 00:11:16.488 } 00:11:16.488 }, 00:11:16.488 "base_bdevs_list": [ 00:11:16.488 { 00:11:16.488 "name": "spare", 00:11:16.488 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:16.488 "is_configured": true, 00:11:16.488 "data_offset": 0, 00:11:16.488 "data_size": 65536 00:11:16.488 }, 00:11:16.488 { 00:11:16.488 "name": "BaseBdev2", 00:11:16.488 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:16.488 "is_configured": true, 00:11:16.488 "data_offset": 0, 00:11:16.488 "data_size": 65536 00:11:16.488 } 00:11:16.488 ] 
00:11:16.488 }' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.488 [2024-12-07 01:55:21.773870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=319 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.488 01:55:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.488 "name": "raid_bdev1", 00:11:16.488 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:16.488 "strip_size_kb": 0, 00:11:16.488 "state": "online", 00:11:16.488 "raid_level": "raid1", 00:11:16.488 "superblock": false, 00:11:16.488 "num_base_bdevs": 2, 00:11:16.488 "num_base_bdevs_discovered": 2, 00:11:16.488 "num_base_bdevs_operational": 2, 00:11:16.488 "process": { 00:11:16.488 "type": "rebuild", 00:11:16.488 "target": "spare", 00:11:16.488 "progress": { 00:11:16.488 "blocks": 14336, 00:11:16.488 "percent": 21 00:11:16.488 } 00:11:16.488 }, 00:11:16.488 "base_bdevs_list": [ 00:11:16.488 { 00:11:16.488 "name": "spare", 00:11:16.488 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:16.488 "is_configured": true, 00:11:16.488 "data_offset": 0, 00:11:16.488 "data_size": 65536 00:11:16.488 }, 00:11:16.488 { 00:11:16.488 "name": "BaseBdev2", 00:11:16.488 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:16.488 "is_configured": true, 00:11:16.488 "data_offset": 0, 00:11:16.488 "data_size": 65536 00:11:16.488 } 00:11:16.488 ] 00:11:16.488 }' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.488 [2024-12-07 01:55:21.875168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.488 01:55:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.488 01:55:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.747 [2024-12-07 01:55:22.094868] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:16.747 [2024-12-07 01:55:22.095349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:17.045 [2024-12-07 01:55:22.333637] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:17.317 143.25 IOPS, 429.75 MiB/s [2024-12-07T01:55:22.779Z] [2024-12-07 01:55:22.554738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:11:17.317 [2024-12-07 01:55:22.666747] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:17.576 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:17.576 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.576 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.576 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.576 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.577 01:55:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.577 "name": "raid_bdev1", 00:11:17.577 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:17.577 "strip_size_kb": 0, 00:11:17.577 "state": "online", 00:11:17.577 "raid_level": "raid1", 00:11:17.577 "superblock": false, 00:11:17.577 "num_base_bdevs": 2, 00:11:17.577 "num_base_bdevs_discovered": 2, 00:11:17.577 "num_base_bdevs_operational": 2, 00:11:17.577 "process": { 00:11:17.577 "type": "rebuild", 00:11:17.577 "target": "spare", 00:11:17.577 "progress": { 00:11:17.577 "blocks": 30720, 00:11:17.577 "percent": 46 00:11:17.577 } 00:11:17.577 }, 00:11:17.577 "base_bdevs_list": [ 00:11:17.577 { 00:11:17.577 "name": "spare", 00:11:17.577 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:17.577 "is_configured": true, 00:11:17.577 "data_offset": 0, 00:11:17.577 "data_size": 65536 00:11:17.577 }, 00:11:17.577 { 00:11:17.577 "name": "BaseBdev2", 00:11:17.577 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:17.577 "is_configured": true, 00:11:17.577 "data_offset": 0, 00:11:17.577 "data_size": 65536 00:11:17.577 } 00:11:17.577 ] 00:11:17.577 }' 00:11:17.577 01:55:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.577 01:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.577 01:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.835 01:55:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.835 01:55:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:17.835 [2024-12-07 01:55:23.105381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:17.835 [2024-12-07 01:55:23.105615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:11:18.355 126.60 IOPS, 379.80 MiB/s [2024-12-07T01:55:23.817Z] [2024-12-07 01:55:23.751983] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.922 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:18.922 "name": "raid_bdev1", 00:11:18.922 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:18.922 "strip_size_kb": 0, 00:11:18.922 "state": "online", 00:11:18.922 "raid_level": "raid1", 00:11:18.922 "superblock": false, 00:11:18.922 "num_base_bdevs": 2, 00:11:18.922 "num_base_bdevs_discovered": 2, 00:11:18.922 "num_base_bdevs_operational": 2, 00:11:18.922 "process": { 00:11:18.922 "type": "rebuild", 00:11:18.922 "target": "spare", 00:11:18.922 "progress": { 00:11:18.922 "blocks": 49152, 00:11:18.922 "percent": 75 00:11:18.923 } 00:11:18.923 }, 00:11:18.923 "base_bdevs_list": [ 00:11:18.923 { 00:11:18.923 "name": "spare", 00:11:18.923 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:18.923 "is_configured": true, 00:11:18.923 "data_offset": 0, 00:11:18.923 "data_size": 65536 00:11:18.923 }, 00:11:18.923 { 00:11:18.923 "name": "BaseBdev2", 00:11:18.923 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:18.923 "is_configured": true, 00:11:18.923 "data_offset": 0, 00:11:18.923 "data_size": 65536 00:11:18.923 } 00:11:18.923 ] 00:11:18.923 }' 00:11:18.923 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.923 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.923 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.923 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.923 01:55:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.440 113.67 IOPS, 341.00 MiB/s [2024-12-07T01:55:24.902Z] [2024-12-07 01:55:24.875260] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:19.699 [2024-12-07 01:55:24.942744] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:19.699 [2024-12-07 
01:55:24.944586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.957 "name": "raid_bdev1", 00:11:19.957 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:19.957 "strip_size_kb": 0, 00:11:19.957 "state": "online", 00:11:19.957 "raid_level": "raid1", 00:11:19.957 "superblock": false, 00:11:19.957 "num_base_bdevs": 2, 00:11:19.957 "num_base_bdevs_discovered": 2, 00:11:19.957 "num_base_bdevs_operational": 2, 00:11:19.957 "base_bdevs_list": [ 00:11:19.957 { 00:11:19.957 "name": "spare", 00:11:19.957 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:19.957 "is_configured": true, 00:11:19.957 "data_offset": 0, 00:11:19.957 "data_size": 
65536 00:11:19.957 }, 00:11:19.957 { 00:11:19.957 "name": "BaseBdev2", 00:11:19.957 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:19.957 "is_configured": true, 00:11:19.957 "data_offset": 0, 00:11:19.957 "data_size": 65536 00:11:19.957 } 00:11:19.957 ] 00:11:19.957 }' 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.957 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.957 102.43 IOPS, 307.29 MiB/s [2024-12-07T01:55:25.419Z] 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.215 01:55:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:20.215 "name": "raid_bdev1", 00:11:20.215 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:20.215 "strip_size_kb": 0, 00:11:20.215 "state": "online", 00:11:20.215 "raid_level": "raid1", 00:11:20.215 "superblock": false, 00:11:20.215 "num_base_bdevs": 2, 00:11:20.215 "num_base_bdevs_discovered": 2, 00:11:20.215 "num_base_bdevs_operational": 2, 00:11:20.215 "base_bdevs_list": [ 00:11:20.215 { 00:11:20.215 "name": "spare", 00:11:20.215 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:20.215 "is_configured": true, 00:11:20.215 "data_offset": 0, 00:11:20.215 "data_size": 65536 00:11:20.215 }, 00:11:20.215 { 00:11:20.215 "name": "BaseBdev2", 00:11:20.215 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:20.215 "is_configured": true, 00:11:20.215 "data_offset": 0, 00:11:20.215 "data_size": 65536 00:11:20.215 } 00:11:20.215 ] 00:11:20.215 }' 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.215 01:55:25 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.215 "name": "raid_bdev1", 00:11:20.215 "uuid": "f4616664-166c-4c89-ad16-1757ad01ba3d", 00:11:20.215 "strip_size_kb": 0, 00:11:20.215 "state": "online", 00:11:20.215 "raid_level": "raid1", 00:11:20.215 "superblock": false, 00:11:20.215 "num_base_bdevs": 2, 00:11:20.215 "num_base_bdevs_discovered": 2, 00:11:20.215 "num_base_bdevs_operational": 2, 00:11:20.215 "base_bdevs_list": [ 00:11:20.215 { 00:11:20.215 "name": "spare", 00:11:20.215 "uuid": "319c1ae7-9180-544e-8502-6b8ecdb76e69", 00:11:20.215 "is_configured": true, 00:11:20.215 "data_offset": 0, 00:11:20.215 "data_size": 65536 00:11:20.215 }, 00:11:20.215 { 00:11:20.215 "name": "BaseBdev2", 00:11:20.215 "uuid": "acf265a0-1b21-5b62-8624-e2eb1eae49bf", 00:11:20.215 "is_configured": true, 00:11:20.215 "data_offset": 0, 00:11:20.215 "data_size": 65536 00:11:20.215 } 
00:11:20.215 ] 00:11:20.215 }' 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.215 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.472 01:55:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.472 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.472 01:55:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.472 [2024-12-07 01:55:25.932361] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.730 [2024-12-07 01:55:25.932458] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.730 00:11:20.730 Latency(us) 00:11:20.730 [2024-12-07T01:55:26.192Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.730 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:20.730 raid_bdev1 : 7.64 96.23 288.68 0.00 0.00 14077.55 280.82 108062.85 00:11:20.730 [2024-12-07T01:55:26.192Z] =================================================================================================================== 00:11:20.730 [2024-12-07T01:55:26.192Z] Total : 96.23 288.68 0.00 0.00 14077.55 280.82 108062.85 00:11:20.730 [2024-12-07 01:55:26.015310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.730 [2024-12-07 01:55:26.015388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.730 [2024-12-07 01:55:26.015504] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.730 [2024-12-07 01:55:26.015553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:20.730 { 00:11:20.730 "results": [ 00:11:20.730 { 00:11:20.730 "job": 
"raid_bdev1", 00:11:20.730 "core_mask": "0x1", 00:11:20.730 "workload": "randrw", 00:11:20.730 "percentage": 50, 00:11:20.730 "status": "finished", 00:11:20.730 "queue_depth": 2, 00:11:20.730 "io_size": 3145728, 00:11:20.730 "runtime": 7.638332, 00:11:20.730 "iops": 96.22519680998417, 00:11:20.730 "mibps": 288.67559042995254, 00:11:20.730 "io_failed": 0, 00:11:20.730 "io_timeout": 0, 00:11:20.730 "avg_latency_us": 14077.547911950805, 00:11:20.730 "min_latency_us": 280.8174672489083, 00:11:20.730 "max_latency_us": 108062.85414847161 00:11:20.730 } 00:11:20.730 ], 00:11:20.730 "core_count": 1 00:11:20.730 } 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.730 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:20.987 /dev/nbd0 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.988 1+0 records in 00:11:20.988 1+0 records out 00:11:20.988 4096 
bytes (4.1 kB, 4.0 KiB) copied, 0.000418929 s, 9.8 MB/s 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:11:20.988 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:21.245 /dev/nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:21.245 1+0 records in 00:11:21.245 1+0 records out 00:11:21.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561016 s, 7.3 MB/s 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:21.245 
01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.245 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.503 01:55:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.761 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86851 00:11:21.762 01:55:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 86851 ']' 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 86851 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86851 00:11:21.762 killing process with pid 86851 00:11:21.762 Received shutdown signal, test time was about 8.710700 seconds 00:11:21.762 00:11:21.762 Latency(us) 00:11:21.762 [2024-12-07T01:55:27.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:21.762 [2024-12-07T01:55:27.224Z] =================================================================================================================== 00:11:21.762 [2024-12-07T01:55:27.224Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86851' 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 86851 00:11:21.762 [2024-12-07 01:55:27.082446] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.762 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 86851 00:11:21.762 [2024-12-07 01:55:27.107991] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:22.022 00:11:22.022 real 0m10.587s 00:11:22.022 user 0m13.717s 00:11:22.022 sys 0m1.383s 
00:11:22.022 ************************************ 00:11:22.022 END TEST raid_rebuild_test_io 00:11:22.022 ************************************ 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 01:55:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:22.022 01:55:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:22.022 01:55:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:22.022 01:55:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.022 ************************************ 00:11:22.022 START TEST raid_rebuild_test_sb_io 00:11:22.022 ************************************ 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:22.022 01:55:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87211 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87211 
00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 87211 ']' 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:22.022 01:55:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.281 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:22.281 Zero copy mechanism will not be used. 00:11:22.281 [2024-12-07 01:55:27.507216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:22.281 [2024-12-07 01:55:27.507322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87211 ] 00:11:22.281 [2024-12-07 01:55:27.649408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.281 [2024-12-07 01:55:27.692761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.281 [2024-12-07 01:55:27.733794] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:22.281 [2024-12-07 01:55:27.733834] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 BaseBdev1_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 [2024-12-07 01:55:28.355898] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:23.217 [2024-12-07 01:55:28.356010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.217 [2024-12-07 01:55:28.356059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:23.217 [2024-12-07 01:55:28.356103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.217 [2024-12-07 01:55:28.358211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.217 [2024-12-07 01:55:28.358282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.217 BaseBdev1 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 BaseBdev2_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 [2024-12-07 01:55:28.400434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:23.217 [2024-12-07 01:55:28.400542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:23.217 [2024-12-07 01:55:28.400593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:23.217 [2024-12-07 01:55:28.400618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.217 [2024-12-07 01:55:28.405044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.217 [2024-12-07 01:55:28.405100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.217 BaseBdev2 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 spare_malloc 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 spare_delay 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 
[2024-12-07 01:55:28.442704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:23.217 [2024-12-07 01:55:28.442802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.217 [2024-12-07 01:55:28.442840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.217 [2024-12-07 01:55:28.442867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.217 [2024-12-07 01:55:28.444974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.217 [2024-12-07 01:55:28.445005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:23.217 spare 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 [2024-12-07 01:55:28.454751] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.217 [2024-12-07 01:55:28.456496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.217 [2024-12-07 01:55:28.456649] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:23.217 [2024-12-07 01:55:28.456675] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:23.217 [2024-12-07 01:55:28.456933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:23.217 [2024-12-07 01:55:28.457058] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:23.217 [2024-12-07 
01:55:28.457071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:23.217 [2024-12-07 01:55:28.457189] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.217 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.217 "name": "raid_bdev1", 00:11:23.217 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:23.217 "strip_size_kb": 0, 00:11:23.217 "state": "online", 00:11:23.217 "raid_level": "raid1", 00:11:23.217 "superblock": true, 00:11:23.217 "num_base_bdevs": 2, 00:11:23.217 "num_base_bdevs_discovered": 2, 00:11:23.217 "num_base_bdevs_operational": 2, 00:11:23.217 "base_bdevs_list": [ 00:11:23.217 { 00:11:23.217 "name": "BaseBdev1", 00:11:23.217 "uuid": "9bfe7595-26ef-5d2b-8986-1044c80ee406", 00:11:23.217 "is_configured": true, 00:11:23.217 "data_offset": 2048, 00:11:23.217 "data_size": 63488 00:11:23.217 }, 00:11:23.217 { 00:11:23.217 "name": "BaseBdev2", 00:11:23.217 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:23.217 "is_configured": true, 00:11:23.217 "data_offset": 2048, 00:11:23.217 "data_size": 63488 00:11:23.218 } 00:11:23.218 ] 00:11:23.218 }' 00:11:23.218 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.218 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 [2024-12-07 01:55:28.950067] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:23.785 01:55:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 [2024-12-07 01:55:29.049677] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.785 "name": "raid_bdev1", 00:11:23.785 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:23.785 "strip_size_kb": 0, 00:11:23.785 "state": "online", 00:11:23.785 "raid_level": "raid1", 00:11:23.785 "superblock": true, 00:11:23.785 "num_base_bdevs": 2, 00:11:23.785 "num_base_bdevs_discovered": 1, 00:11:23.785 "num_base_bdevs_operational": 1, 00:11:23.785 "base_bdevs_list": [ 00:11:23.785 { 00:11:23.785 "name": null, 00:11:23.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.785 "is_configured": false, 00:11:23.785 "data_offset": 0, 00:11:23.785 "data_size": 63488 00:11:23.785 }, 00:11:23.785 { 00:11:23.785 "name": "BaseBdev2", 00:11:23.785 "uuid": 
"810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:23.785 "is_configured": true, 00:11:23.785 "data_offset": 2048, 00:11:23.785 "data_size": 63488 00:11:23.785 } 00:11:23.785 ] 00:11:23.785 }' 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.785 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.785 [2024-12-07 01:55:29.139482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:23.785 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:23.785 Zero copy mechanism will not be used. 00:11:23.785 Running I/O for 60 seconds... 00:11:24.352 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:24.352 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.352 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.352 [2024-12-07 01:55:29.522613] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.352 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.352 01:55:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:24.352 [2024-12-07 01:55:29.548205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:24.352 [2024-12-07 01:55:29.550180] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:24.352 [2024-12-07 01:55:29.662616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:24.352 [2024-12-07 01:55:29.663261] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:24.611 [2024-12-07 01:55:29.897330] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:24.611 [2024-12-07 01:55:29.897648] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:24.870 172.00 IOPS, 516.00 MiB/s [2024-12-07T01:55:30.332Z] [2024-12-07 01:55:30.248126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:25.128 [2024-12-07 01:55:30.367317] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.128 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.388 "name": "raid_bdev1", 00:11:25.388 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:25.388 
"strip_size_kb": 0, 00:11:25.388 "state": "online", 00:11:25.388 "raid_level": "raid1", 00:11:25.388 "superblock": true, 00:11:25.388 "num_base_bdevs": 2, 00:11:25.388 "num_base_bdevs_discovered": 2, 00:11:25.388 "num_base_bdevs_operational": 2, 00:11:25.388 "process": { 00:11:25.388 "type": "rebuild", 00:11:25.388 "target": "spare", 00:11:25.388 "progress": { 00:11:25.388 "blocks": 10240, 00:11:25.388 "percent": 16 00:11:25.388 } 00:11:25.388 }, 00:11:25.388 "base_bdevs_list": [ 00:11:25.388 { 00:11:25.388 "name": "spare", 00:11:25.388 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:25.388 "is_configured": true, 00:11:25.388 "data_offset": 2048, 00:11:25.388 "data_size": 63488 00:11:25.388 }, 00:11:25.388 { 00:11:25.388 "name": "BaseBdev2", 00:11:25.388 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:25.388 "is_configured": true, 00:11:25.388 "data_offset": 2048, 00:11:25.388 "data_size": 63488 00:11:25.388 } 00:11:25.388 ] 00:11:25.388 }' 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.388 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.388 [2024-12-07 01:55:30.705505] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.388 [2024-12-07 01:55:30.818341] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:11:25.388 [2024-12-07 01:55:30.820224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.388 [2024-12-07 01:55:30.820315] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.388 [2024-12-07 01:55:30.820343] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:25.388 [2024-12-07 01:55:30.837019] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.647 "name": "raid_bdev1", 00:11:25.647 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:25.647 "strip_size_kb": 0, 00:11:25.647 "state": "online", 00:11:25.647 "raid_level": "raid1", 00:11:25.647 "superblock": true, 00:11:25.647 "num_base_bdevs": 2, 00:11:25.647 "num_base_bdevs_discovered": 1, 00:11:25.647 "num_base_bdevs_operational": 1, 00:11:25.647 "base_bdevs_list": [ 00:11:25.647 { 00:11:25.647 "name": null, 00:11:25.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.647 "is_configured": false, 00:11:25.647 "data_offset": 0, 00:11:25.647 "data_size": 63488 00:11:25.647 }, 00:11:25.647 { 00:11:25.647 "name": "BaseBdev2", 00:11:25.647 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:25.647 "is_configured": true, 00:11:25.647 "data_offset": 2048, 00:11:25.647 "data_size": 63488 00:11:25.647 } 00:11:25.647 ] 00:11:25.647 }' 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.647 01:55:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.906 168.00 IOPS, 504.00 MiB/s [2024-12-07T01:55:31.368Z] 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.906 "name": "raid_bdev1", 00:11:25.906 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:25.906 "strip_size_kb": 0, 00:11:25.906 "state": "online", 00:11:25.906 "raid_level": "raid1", 00:11:25.906 "superblock": true, 00:11:25.906 "num_base_bdevs": 2, 00:11:25.906 "num_base_bdevs_discovered": 1, 00:11:25.906 "num_base_bdevs_operational": 1, 00:11:25.906 "base_bdevs_list": [ 00:11:25.906 { 00:11:25.906 "name": null, 00:11:25.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.906 "is_configured": false, 00:11:25.906 "data_offset": 0, 00:11:25.906 "data_size": 63488 00:11:25.906 }, 00:11:25.906 { 00:11:25.906 "name": "BaseBdev2", 00:11:25.906 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:25.906 "is_configured": true, 00:11:25.906 "data_offset": 2048, 00:11:25.906 "data_size": 63488 00:11:25.906 } 00:11:25.906 ] 00:11:25.906 }' 00:11:25.906 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.190 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.191 [2024-12-07 01:55:31.453628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.191 01:55:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:26.191 [2024-12-07 01:55:31.490592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:26.191 [2024-12-07 01:55:31.492531] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:26.191 [2024-12-07 01:55:31.596767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:26.191 [2024-12-07 01:55:31.597220] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:26.448 [2024-12-07 01:55:31.820785] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:26.448 [2024-12-07 01:55:31.821048] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:26.706 186.33 IOPS, 559.00 MiB/s [2024-12-07T01:55:32.168Z] [2024-12-07 01:55:32.149959] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:26.963 [2024-12-07 01:55:32.358598] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:26.963 [2024-12-07 01:55:32.359034] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.221 "name": "raid_bdev1", 00:11:27.221 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:27.221 "strip_size_kb": 0, 00:11:27.221 "state": "online", 00:11:27.221 "raid_level": "raid1", 00:11:27.221 "superblock": true, 00:11:27.221 "num_base_bdevs": 2, 00:11:27.221 "num_base_bdevs_discovered": 2, 00:11:27.221 "num_base_bdevs_operational": 2, 00:11:27.221 "process": { 00:11:27.221 "type": "rebuild", 00:11:27.221 "target": "spare", 00:11:27.221 "progress": { 
00:11:27.221 "blocks": 10240, 00:11:27.221 "percent": 16 00:11:27.221 } 00:11:27.221 }, 00:11:27.221 "base_bdevs_list": [ 00:11:27.221 { 00:11:27.221 "name": "spare", 00:11:27.221 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:27.221 "is_configured": true, 00:11:27.221 "data_offset": 2048, 00:11:27.221 "data_size": 63488 00:11:27.221 }, 00:11:27.221 { 00:11:27.221 "name": "BaseBdev2", 00:11:27.221 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:27.221 "is_configured": true, 00:11:27.221 "data_offset": 2048, 00:11:27.221 "data_size": 63488 00:11:27.221 } 00:11:27.221 ] 00:11:27.221 }' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:27.221 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=330 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- 
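[Editor's note] The `bdev_raid.sh: line 666: [: =: unary operator expected` error above is the classic failure mode of the POSIX `[` builtin when an unquoted variable expands to nothing: `'[' = false ']'` shows the left-hand operand vanished entirely, so `[` sees `=` as a unary operator. The test script tolerates this (the `if` simply takes the false branch), but the pattern is worth recognizing. A minimal sketch of the bug and the quoting fix, assuming ordinary bash semantics (the variable name `flag` here is hypothetical, not taken from the script):

```shell
#!/usr/bin/env bash
# An empty, unquoted variable disappears before `[` parses its arguments,
# reproducing "[: =: unary operator expected".
flag=""

# Unquoted: expands to `[ = false ]` -- a syntax error for the test builtin.
# The error goes to stderr and the condition evaluates as false (status 2).
if [ $flag = false ] 2>/dev/null; then
  echo "took true branch"
else
  echo "unquoted comparison is malformed"
fi

# Quoted: the empty string survives as a real positional argument,
# so `[ "" = false ]` is a well-formed (and false) comparison.
if [ "$flag" = false ]; then
  echo "flag is false"
else
  echo "quoted comparison is well-formed"
fi
```

Using `[[ $flag = false ]]` avoids the problem entirely, since `[[` is shell syntax rather than a command and does not word-split its operands.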
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.221 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.221 "name": "raid_bdev1", 00:11:27.221 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:27.221 "strip_size_kb": 0, 00:11:27.221 "state": "online", 00:11:27.221 "raid_level": "raid1", 00:11:27.221 "superblock": true, 00:11:27.221 "num_base_bdevs": 2, 00:11:27.221 "num_base_bdevs_discovered": 2, 00:11:27.221 "num_base_bdevs_operational": 2, 00:11:27.221 "process": { 00:11:27.221 "type": "rebuild", 00:11:27.221 "target": "spare", 00:11:27.221 "progress": { 00:11:27.221 "blocks": 12288, 00:11:27.221 "percent": 19 00:11:27.221 } 00:11:27.221 }, 00:11:27.221 "base_bdevs_list": [ 00:11:27.221 { 00:11:27.221 "name": "spare", 00:11:27.221 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:27.221 "is_configured": true, 00:11:27.221 "data_offset": 2048, 00:11:27.222 "data_size": 63488 
00:11:27.222 }, 00:11:27.222 { 00:11:27.222 "name": "BaseBdev2", 00:11:27.222 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:27.222 "is_configured": true, 00:11:27.222 "data_offset": 2048, 00:11:27.222 "data_size": 63488 00:11:27.222 } 00:11:27.222 ] 00:11:27.222 }' 00:11:27.222 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.479 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:27.480 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.480 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:27.480 01:55:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:27.480 [2024-12-07 01:55:32.818853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:27.737 155.00 IOPS, 465.00 MiB/s [2024-12-07T01:55:33.199Z] [2024-12-07 01:55:33.152897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:28.672 "name": "raid_bdev1", 00:11:28.672 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:28.672 "strip_size_kb": 0, 00:11:28.672 "state": "online", 00:11:28.672 "raid_level": "raid1", 00:11:28.672 "superblock": true, 00:11:28.672 "num_base_bdevs": 2, 00:11:28.672 "num_base_bdevs_discovered": 2, 00:11:28.672 "num_base_bdevs_operational": 2, 00:11:28.672 "process": { 00:11:28.672 "type": "rebuild", 00:11:28.672 "target": "spare", 00:11:28.672 "progress": { 00:11:28.672 "blocks": 28672, 00:11:28.672 "percent": 45 00:11:28.672 } 00:11:28.672 }, 00:11:28.672 "base_bdevs_list": [ 00:11:28.672 { 00:11:28.672 "name": "spare", 00:11:28.672 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:28.672 "is_configured": true, 00:11:28.672 "data_offset": 2048, 00:11:28.672 "data_size": 63488 00:11:28.672 }, 00:11:28.672 { 00:11:28.672 "name": "BaseBdev2", 00:11:28.672 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:28.672 "is_configured": true, 00:11:28.672 "data_offset": 2048, 00:11:28.672 "data_size": 63488 00:11:28.672 } 00:11:28.672 ] 00:11:28.672 }' 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:28.672 01:55:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:28.930 135.60 IOPS, 406.80 MiB/s [2024-12-07T01:55:34.392Z] [2024-12-07 01:55:34.336695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:28.930 [2024-12-07 01:55:34.336930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:29.188 [2024-12-07 01:55:34.641329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.752 [2024-12-07 01:55:34.944369] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.752 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.752 "name": "raid_bdev1", 00:11:29.752 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:29.752 "strip_size_kb": 0, 00:11:29.752 "state": "online", 00:11:29.752 "raid_level": "raid1", 00:11:29.752 "superblock": true, 00:11:29.752 "num_base_bdevs": 2, 00:11:29.752 "num_base_bdevs_discovered": 2, 00:11:29.752 "num_base_bdevs_operational": 2, 00:11:29.752 "process": { 00:11:29.752 "type": "rebuild", 00:11:29.752 "target": "spare", 00:11:29.752 "progress": { 00:11:29.752 "blocks": 49152, 00:11:29.752 "percent": 77 00:11:29.753 } 00:11:29.753 }, 00:11:29.753 "base_bdevs_list": [ 00:11:29.753 { 00:11:29.753 "name": "spare", 00:11:29.753 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:29.753 "is_configured": true, 00:11:29.753 "data_offset": 2048, 00:11:29.753 "data_size": 63488 00:11:29.753 }, 00:11:29.753 { 00:11:29.753 "name": "BaseBdev2", 00:11:29.753 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:29.753 "is_configured": true, 00:11:29.753 "data_offset": 2048, 00:11:29.753 "data_size": 63488 00:11:29.753 } 00:11:29.753 ] 00:11:29.753 }' 00:11:29.753 01:55:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.753 01:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:29.753 01:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.753 [2024-12-07 01:55:35.045962] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:29.753 [2024-12-07 01:55:35.046292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:29.753 01:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:29.753 01:55:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:30.317 119.50 IOPS, 358.50 MiB/s [2024-12-07T01:55:35.779Z] [2024-12-07 01:55:35.686162] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:30.575 [2024-12-07 01:55:35.791190] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:30.575 [2024-12-07 01:55:35.792872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.834 01:55:36 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.834 "name": "raid_bdev1", 00:11:30.834 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:30.834 "strip_size_kb": 0, 00:11:30.834 "state": "online", 00:11:30.834 "raid_level": "raid1", 00:11:30.834 "superblock": true, 00:11:30.834 "num_base_bdevs": 2, 00:11:30.834 "num_base_bdevs_discovered": 2, 00:11:30.834 "num_base_bdevs_operational": 2, 00:11:30.834 "base_bdevs_list": [ 00:11:30.834 { 00:11:30.834 "name": "spare", 00:11:30.834 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:30.834 "is_configured": true, 00:11:30.834 "data_offset": 2048, 00:11:30.834 "data_size": 63488 00:11:30.834 }, 00:11:30.834 { 00:11:30.834 "name": "BaseBdev2", 00:11:30.834 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:30.834 "is_configured": true, 00:11:30.834 "data_offset": 2048, 00:11:30.834 "data_size": 63488 00:11:30.834 } 00:11:30.834 ] 00:11:30.834 }' 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.834 106.71 IOPS, 320.14 MiB/s [2024-12-07T01:55:36.296Z] 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:11:30.834 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:30.835 "name": "raid_bdev1", 00:11:30.835 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:30.835 "strip_size_kb": 0, 00:11:30.835 "state": "online", 00:11:30.835 "raid_level": "raid1", 00:11:30.835 "superblock": true, 00:11:30.835 "num_base_bdevs": 2, 00:11:30.835 "num_base_bdevs_discovered": 2, 00:11:30.835 "num_base_bdevs_operational": 2, 00:11:30.835 "base_bdevs_list": [ 00:11:30.835 { 00:11:30.835 "name": "spare", 00:11:30.835 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:30.835 "is_configured": true, 00:11:30.835 "data_offset": 2048, 00:11:30.835 "data_size": 63488 00:11:30.835 }, 00:11:30.835 { 00:11:30.835 "name": "BaseBdev2", 00:11:30.835 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:30.835 "is_configured": true, 00:11:30.835 "data_offset": 2048, 00:11:30.835 "data_size": 63488 00:11:30.835 } 00:11:30.835 ] 00:11:30.835 }' 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:30.835 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.094 "name": "raid_bdev1", 00:11:31.094 
"uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:31.094 "strip_size_kb": 0, 00:11:31.094 "state": "online", 00:11:31.094 "raid_level": "raid1", 00:11:31.094 "superblock": true, 00:11:31.094 "num_base_bdevs": 2, 00:11:31.094 "num_base_bdevs_discovered": 2, 00:11:31.094 "num_base_bdevs_operational": 2, 00:11:31.094 "base_bdevs_list": [ 00:11:31.094 { 00:11:31.094 "name": "spare", 00:11:31.094 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:31.094 "is_configured": true, 00:11:31.094 "data_offset": 2048, 00:11:31.094 "data_size": 63488 00:11:31.094 }, 00:11:31.094 { 00:11:31.094 "name": "BaseBdev2", 00:11:31.094 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:31.094 "is_configured": true, 00:11:31.094 "data_offset": 2048, 00:11:31.094 "data_size": 63488 00:11:31.094 } 00:11:31.094 ] 00:11:31.094 }' 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.094 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.663 [2024-12-07 01:55:36.838168] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:31.663 [2024-12-07 01:55:36.838197] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.663 00:11:31.663 Latency(us) 00:11:31.663 [2024-12-07T01:55:37.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:31.663 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:31.663 raid_bdev1 : 7.74 99.54 298.63 0.00 0.00 13331.99 280.82 108520.75 00:11:31.663 [2024-12-07T01:55:37.125Z] 
=================================================================================================================== 00:11:31.663 [2024-12-07T01:55:37.125Z] Total : 99.54 298.63 0.00 0.00 13331.99 280.82 108520.75 00:11:31.663 [2024-12-07 01:55:36.865381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.663 [2024-12-07 01:55:36.865426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.663 [2024-12-07 01:55:36.865509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.663 [2024-12-07 01:55:36.865522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:31.663 { 00:11:31.663 "results": [ 00:11:31.663 { 00:11:31.663 "job": "raid_bdev1", 00:11:31.663 "core_mask": "0x1", 00:11:31.663 "workload": "randrw", 00:11:31.663 "percentage": 50, 00:11:31.663 "status": "finished", 00:11:31.663 "queue_depth": 2, 00:11:31.663 "io_size": 3145728, 00:11:31.663 "runtime": 7.735326, 00:11:31.663 "iops": 99.54331595074338, 00:11:31.663 "mibps": 298.62994785223015, 00:11:31.663 "io_failed": 0, 00:11:31.663 "io_timeout": 0, 00:11:31.663 "avg_latency_us": 13331.99334883457, 00:11:31.663 "min_latency_us": 280.8174672489083, 00:11:31.663 "max_latency_us": 108520.74759825328 00:11:31.663 } 00:11:31.663 ], 00:11:31.663 "core_count": 1 00:11:31.663 } 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:31.663 01:55:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.663 01:55:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:31.922 /dev/nbd0 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 
00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.922 1+0 records in 00:11:31.922 1+0 records out 00:11:31.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485478 s, 8.4 MB/s 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.922 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:32.181 /dev/nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:32.181 01:55:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:32.181 1+0 records in 00:11:32.181 1+0 records out 00:11:32.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307179 s, 13.3 MB/s 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.181 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:32.440 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:32.441 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.441 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:32.441 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.441 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.699 [2024-12-07 01:55:37.971942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:32.699 [2024-12-07 01:55:37.972003] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.699 [2024-12-07 01:55:37.972027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:32.699 [2024-12-07 01:55:37.972036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.699 [2024-12-07 01:55:37.974175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.699 [2024-12-07 01:55:37.974266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:32.699 [2024-12-07 01:55:37.974360] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:32.699 [2024-12-07 01:55:37.974396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:32.699 [2024-12-07 01:55:37.974516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.699 spare 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:32.699 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.700 01:55:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.700 [2024-12-07 01:55:38.074401] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:32.700 [2024-12-07 01:55:38.074429] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.700 [2024-12-07 01:55:38.074704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:11:32.700 [2024-12-07 01:55:38.074842] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:32.700 [2024-12-07 01:55:38.074851] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000001580 00:11:32.700 [2024-12-07 01:55:38.074979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.700 01:55:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.700 "name": "raid_bdev1", 00:11:32.700 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:32.700 "strip_size_kb": 0, 00:11:32.700 "state": "online", 00:11:32.700 "raid_level": "raid1", 00:11:32.700 "superblock": true, 00:11:32.700 "num_base_bdevs": 2, 00:11:32.700 "num_base_bdevs_discovered": 2, 00:11:32.700 "num_base_bdevs_operational": 2, 00:11:32.700 "base_bdevs_list": [ 00:11:32.700 { 00:11:32.700 "name": "spare", 00:11:32.700 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:32.700 "is_configured": true, 00:11:32.700 "data_offset": 2048, 00:11:32.700 "data_size": 63488 00:11:32.700 }, 00:11:32.700 { 00:11:32.700 "name": "BaseBdev2", 00:11:32.700 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:32.700 "is_configured": true, 00:11:32.700 "data_offset": 2048, 00:11:32.700 "data_size": 63488 00:11:32.700 } 00:11:32.700 ] 00:11:32.700 }' 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.700 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.267 01:55:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:33.267 "name": "raid_bdev1", 00:11:33.267 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:33.267 "strip_size_kb": 0, 00:11:33.267 "state": "online", 00:11:33.267 "raid_level": "raid1", 00:11:33.267 "superblock": true, 00:11:33.267 "num_base_bdevs": 2, 00:11:33.267 "num_base_bdevs_discovered": 2, 00:11:33.267 "num_base_bdevs_operational": 2, 00:11:33.267 "base_bdevs_list": [ 00:11:33.267 { 00:11:33.267 "name": "spare", 00:11:33.267 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:33.267 "is_configured": true, 00:11:33.267 "data_offset": 2048, 00:11:33.267 "data_size": 63488 00:11:33.267 }, 00:11:33.267 { 00:11:33.267 "name": "BaseBdev2", 00:11:33.267 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:33.267 "is_configured": true, 00:11:33.267 "data_offset": 2048, 00:11:33.267 "data_size": 63488 00:11:33.267 } 00:11:33.267 ] 00:11:33.267 }' 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:33.267 01:55:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.267 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.526 [2024-12-07 01:55:38.770775] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.526 "name": "raid_bdev1", 00:11:33.526 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:33.526 "strip_size_kb": 0, 00:11:33.526 "state": "online", 00:11:33.526 "raid_level": "raid1", 00:11:33.526 "superblock": true, 00:11:33.526 "num_base_bdevs": 2, 00:11:33.526 "num_base_bdevs_discovered": 1, 00:11:33.526 "num_base_bdevs_operational": 1, 00:11:33.526 "base_bdevs_list": [ 00:11:33.526 { 00:11:33.526 "name": null, 00:11:33.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.526 "is_configured": false, 00:11:33.526 "data_offset": 0, 00:11:33.526 "data_size": 63488 00:11:33.526 }, 00:11:33.526 { 00:11:33.526 "name": "BaseBdev2", 00:11:33.526 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:33.526 "is_configured": true, 00:11:33.526 "data_offset": 2048, 00:11:33.526 "data_size": 63488 00:11:33.526 } 00:11:33.526 ] 00:11:33.526 }' 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.526 01:55:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.094 01:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:34.094 01:55:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.094 01:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:34.094 [2024-12-07 01:55:39.261997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.094 [2024-12-07 01:55:39.262248] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:34.094 [2024-12-07 01:55:39.262312] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:34.094 [2024-12-07 01:55:39.262372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:34.094 [2024-12-07 01:55:39.266687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:11:34.094 01:55:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.094 01:55:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:34.094 [2024-12-07 01:55:39.268643] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:35.033 "name": "raid_bdev1", 00:11:35.033 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:35.033 "strip_size_kb": 0, 00:11:35.033 "state": "online", 00:11:35.033 "raid_level": "raid1", 00:11:35.033 "superblock": true, 00:11:35.033 "num_base_bdevs": 2, 00:11:35.033 "num_base_bdevs_discovered": 2, 00:11:35.033 "num_base_bdevs_operational": 2, 00:11:35.033 "process": { 00:11:35.033 "type": "rebuild", 00:11:35.033 "target": "spare", 00:11:35.033 "progress": { 00:11:35.033 "blocks": 20480, 00:11:35.033 "percent": 32 00:11:35.033 } 00:11:35.033 }, 00:11:35.033 "base_bdevs_list": [ 00:11:35.033 { 00:11:35.033 "name": "spare", 00:11:35.033 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:35.033 "is_configured": true, 00:11:35.033 "data_offset": 2048, 00:11:35.033 "data_size": 63488 00:11:35.033 }, 00:11:35.033 { 00:11:35.033 "name": "BaseBdev2", 00:11:35.033 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:35.033 "is_configured": true, 00:11:35.033 "data_offset": 2048, 00:11:35.033 "data_size": 63488 00:11:35.033 } 00:11:35.033 ] 00:11:35.033 }' 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.033 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.033 [2024-12-07 01:55:40.428861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:35.033 [2024-12-07 01:55:40.472692] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:35.033 [2024-12-07 01:55:40.472805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.033 [2024-12-07 01:55:40.472839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:35.033 [2024-12-07 01:55:40.472851] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.034 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.292 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.292 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.292 "name": "raid_bdev1", 00:11:35.292 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:35.292 "strip_size_kb": 0, 00:11:35.292 "state": "online", 00:11:35.292 "raid_level": "raid1", 00:11:35.292 "superblock": true, 00:11:35.292 "num_base_bdevs": 2, 00:11:35.292 "num_base_bdevs_discovered": 1, 00:11:35.292 "num_base_bdevs_operational": 1, 00:11:35.292 "base_bdevs_list": [ 00:11:35.292 { 00:11:35.292 "name": null, 00:11:35.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.293 "is_configured": false, 00:11:35.293 "data_offset": 0, 00:11:35.293 "data_size": 63488 00:11:35.293 }, 00:11:35.293 { 00:11:35.293 "name": "BaseBdev2", 00:11:35.293 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:35.293 "is_configured": true, 00:11:35.293 "data_offset": 2048, 00:11:35.293 "data_size": 63488 00:11:35.293 } 00:11:35.293 ] 00:11:35.293 }' 00:11:35.293 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.293 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.551 01:55:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:35.551 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.551 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:35.551 [2024-12-07 01:55:40.960502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:35.551 [2024-12-07 01:55:40.960609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.551 [2024-12-07 01:55:40.960651] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:35.551 [2024-12-07 01:55:40.960695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.551 [2024-12-07 01:55:40.961155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.551 [2024-12-07 01:55:40.961221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:35.551 [2024-12-07 01:55:40.961334] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:35.551 [2024-12-07 01:55:40.961373] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:35.551 [2024-12-07 01:55:40.961426] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:35.551 [2024-12-07 01:55:40.961491] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:35.551 [2024-12-07 01:55:40.965787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:35.551 spare 00:11:35.551 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.551 01:55:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:35.551 [2024-12-07 01:55:40.967635] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.528 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.787 01:55:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:36.787 "name": "raid_bdev1", 00:11:36.787 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:36.787 "strip_size_kb": 0, 00:11:36.787 
"state": "online", 00:11:36.787 "raid_level": "raid1", 00:11:36.787 "superblock": true, 00:11:36.787 "num_base_bdevs": 2, 00:11:36.787 "num_base_bdevs_discovered": 2, 00:11:36.787 "num_base_bdevs_operational": 2, 00:11:36.787 "process": { 00:11:36.787 "type": "rebuild", 00:11:36.787 "target": "spare", 00:11:36.787 "progress": { 00:11:36.787 "blocks": 20480, 00:11:36.787 "percent": 32 00:11:36.787 } 00:11:36.787 }, 00:11:36.787 "base_bdevs_list": [ 00:11:36.787 { 00:11:36.787 "name": "spare", 00:11:36.787 "uuid": "486b8c6c-fc09-55ff-b556-1f36cbca7a5e", 00:11:36.787 "is_configured": true, 00:11:36.787 "data_offset": 2048, 00:11:36.787 "data_size": 63488 00:11:36.787 }, 00:11:36.787 { 00:11:36.787 "name": "BaseBdev2", 00:11:36.787 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:36.787 "is_configured": true, 00:11:36.787 "data_offset": 2048, 00:11:36.787 "data_size": 63488 00:11:36.787 } 00:11:36.787 ] 00:11:36.787 }' 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.787 [2024-12-07 01:55:42.128041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.787 [2024-12-07 01:55:42.171898] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:36.787 [2024-12-07 01:55:42.171958] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.787 [2024-12-07 01:55:42.171974] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:36.787 [2024-12-07 01:55:42.171982] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.787 01:55:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.787 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.787 "name": "raid_bdev1", 00:11:36.787 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:36.787 "strip_size_kb": 0, 00:11:36.787 "state": "online", 00:11:36.787 "raid_level": "raid1", 00:11:36.787 "superblock": true, 00:11:36.787 "num_base_bdevs": 2, 00:11:36.787 "num_base_bdevs_discovered": 1, 00:11:36.787 "num_base_bdevs_operational": 1, 00:11:36.787 "base_bdevs_list": [ 00:11:36.787 { 00:11:36.787 "name": null, 00:11:36.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.787 "is_configured": false, 00:11:36.787 "data_offset": 0, 00:11:36.787 "data_size": 63488 00:11:36.788 }, 00:11:36.788 { 00:11:36.788 "name": "BaseBdev2", 00:11:36.788 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:36.788 "is_configured": true, 00:11:36.788 "data_offset": 2048, 00:11:36.788 "data_size": 63488 00:11:36.788 } 00:11:36.788 ] 00:11:36.788 }' 00:11:36.788 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.788 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:37.355 "name": "raid_bdev1", 00:11:37.355 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:37.355 "strip_size_kb": 0, 00:11:37.355 "state": "online", 00:11:37.355 "raid_level": "raid1", 00:11:37.355 "superblock": true, 00:11:37.355 "num_base_bdevs": 2, 00:11:37.355 "num_base_bdevs_discovered": 1, 00:11:37.355 "num_base_bdevs_operational": 1, 00:11:37.355 "base_bdevs_list": [ 00:11:37.355 { 00:11:37.355 "name": null, 00:11:37.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.355 "is_configured": false, 00:11:37.355 "data_offset": 0, 00:11:37.355 "data_size": 63488 00:11:37.355 }, 00:11:37.355 { 00:11:37.355 "name": "BaseBdev2", 00:11:37.355 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:37.355 "is_configured": true, 00:11:37.355 "data_offset": 2048, 00:11:37.355 "data_size": 63488 00:11:37.355 } 00:11:37.355 ] 00:11:37.355 }' 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:37.355 [2024-12-07 01:55:42.735618] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:37.355 [2024-12-07 01:55:42.735682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.355 [2024-12-07 01:55:42.735705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:37.355 [2024-12-07 01:55:42.735713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.355 [2024-12-07 01:55:42.736140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.355 [2024-12-07 01:55:42.736207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:37.355 [2024-12-07 01:55:42.736301] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:37.355 [2024-12-07 01:55:42.736315] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:37.355 [2024-12-07 01:55:42.736338] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:37.355 [2024-12-07 01:55:42.736348] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:37.355 BaseBdev1 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.355 01:55:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.292 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.551 "name": "raid_bdev1", 00:11:38.551 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:38.551 "strip_size_kb": 0, 00:11:38.551 "state": "online", 00:11:38.551 "raid_level": "raid1", 00:11:38.551 "superblock": true, 00:11:38.551 "num_base_bdevs": 2, 00:11:38.551 "num_base_bdevs_discovered": 1, 00:11:38.551 "num_base_bdevs_operational": 1, 00:11:38.551 "base_bdevs_list": [ 00:11:38.551 { 00:11:38.551 "name": null, 00:11:38.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.551 "is_configured": false, 00:11:38.551 "data_offset": 0, 00:11:38.551 "data_size": 63488 00:11:38.551 }, 00:11:38.551 { 00:11:38.551 "name": "BaseBdev2", 00:11:38.551 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:38.551 "is_configured": true, 00:11:38.551 "data_offset": 2048, 00:11:38.551 "data_size": 63488 00:11:38.551 } 00:11:38.551 ] 00:11:38.551 }' 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.551 01:55:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.810 "name": "raid_bdev1", 00:11:38.810 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:38.810 "strip_size_kb": 0, 00:11:38.810 "state": "online", 00:11:38.810 "raid_level": "raid1", 00:11:38.810 "superblock": true, 00:11:38.810 "num_base_bdevs": 2, 00:11:38.810 "num_base_bdevs_discovered": 1, 00:11:38.810 "num_base_bdevs_operational": 1, 00:11:38.810 "base_bdevs_list": [ 00:11:38.810 { 00:11:38.810 "name": null, 00:11:38.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.810 "is_configured": false, 00:11:38.810 "data_offset": 0, 00:11:38.810 "data_size": 63488 00:11:38.810 }, 00:11:38.810 { 00:11:38.810 "name": "BaseBdev2", 00:11:38.810 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:38.810 "is_configured": true, 00:11:38.810 "data_offset": 2048, 00:11:38.810 "data_size": 63488 00:11:38.810 } 00:11:38.810 ] 00:11:38.810 }' 00:11:38.810 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@650 -- # local es=0 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:39.069 [2024-12-07 01:55:44.377122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.069 [2024-12-07 01:55:44.377285] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:39.069 [2024-12-07 01:55:44.377300] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:39.069 request: 00:11:39.069 { 00:11:39.069 "base_bdev": "BaseBdev1", 00:11:39.069 "raid_bdev": "raid_bdev1", 00:11:39.069 "method": "bdev_raid_add_base_bdev", 00:11:39.069 "req_id": 1 00:11:39.069 } 00:11:39.069 Got JSON-RPC error response 00:11:39.069 response: 00:11:39.069 { 00:11:39.069 "code": -22, 00:11:39.069 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:39.069 } 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:39.069 01:55:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:40.004 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:40.004 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.004 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.004 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.004 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.005 "name": "raid_bdev1", 00:11:40.005 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:40.005 "strip_size_kb": 0, 00:11:40.005 "state": "online", 00:11:40.005 "raid_level": "raid1", 00:11:40.005 "superblock": true, 00:11:40.005 "num_base_bdevs": 2, 00:11:40.005 "num_base_bdevs_discovered": 1, 00:11:40.005 "num_base_bdevs_operational": 1, 00:11:40.005 "base_bdevs_list": [ 00:11:40.005 { 00:11:40.005 "name": null, 00:11:40.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.005 "is_configured": false, 00:11:40.005 "data_offset": 0, 00:11:40.005 "data_size": 63488 00:11:40.005 }, 00:11:40.005 { 00:11:40.005 "name": "BaseBdev2", 00:11:40.005 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:40.005 "is_configured": true, 00:11:40.005 "data_offset": 2048, 00:11:40.005 "data_size": 63488 00:11:40.005 } 00:11:40.005 ] 00:11:40.005 }' 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.005 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.570 01:55:45 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.570 "name": "raid_bdev1", 00:11:40.570 "uuid": "b16bd0ba-b8b8-4599-9a4a-4feb2985dc37", 00:11:40.570 "strip_size_kb": 0, 00:11:40.570 "state": "online", 00:11:40.570 "raid_level": "raid1", 00:11:40.570 "superblock": true, 00:11:40.570 "num_base_bdevs": 2, 00:11:40.570 "num_base_bdevs_discovered": 1, 00:11:40.570 "num_base_bdevs_operational": 1, 00:11:40.570 "base_bdevs_list": [ 00:11:40.570 { 00:11:40.570 "name": null, 00:11:40.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.570 "is_configured": false, 00:11:40.570 "data_offset": 0, 00:11:40.570 "data_size": 63488 00:11:40.570 }, 00:11:40.570 { 00:11:40.570 "name": "BaseBdev2", 00:11:40.570 "uuid": "810dae83-da9b-5cf0-9ec5-982a5ac0d3cb", 00:11:40.570 "is_configured": true, 00:11:40.570 "data_offset": 2048, 00:11:40.570 "data_size": 63488 00:11:40.570 } 00:11:40.570 ] 00:11:40.570 }' 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:40.570 01:55:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:40.570 01:55:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87211 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 87211 ']' 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 87211 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:40.570 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87211 00:11:40.831 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:40.831 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:40.831 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87211' 00:11:40.831 killing process with pid 87211 00:11:40.831 Received shutdown signal, test time was about 16.936442 seconds 00:11:40.831 00:11:40.831 Latency(us) 00:11:40.831 [2024-12-07T01:55:46.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:40.831 [2024-12-07T01:55:46.293Z] =================================================================================================================== 00:11:40.831 [2024-12-07T01:55:46.293Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:40.831 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 87211 00:11:40.831 [2024-12-07 01:55:46.045420] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.831 [2024-12-07 01:55:46.045556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.831 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 87211 00:11:40.831 [2024-12-07 01:55:46.045609] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.831 [2024-12-07 01:55:46.045625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:40.831 [2024-12-07 01:55:46.071798] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:41.092 00:11:41.092 real 0m18.889s 00:11:41.092 user 0m25.374s 00:11:41.092 sys 0m2.198s 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:41.092 ************************************ 00:11:41.092 END TEST raid_rebuild_test_sb_io 00:11:41.092 ************************************ 00:11:41.092 01:55:46 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:41.092 01:55:46 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:41.092 01:55:46 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:41.092 01:55:46 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.092 01:55:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.092 ************************************ 00:11:41.092 START TEST raid_rebuild_test 00:11:41.092 ************************************ 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:41.092 01:55:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87893 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87893 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 87893 ']' 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:41.092 01:55:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.092 [2024-12-07 01:55:46.473681] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:11:41.092 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:41.092 Zero copy mechanism will not be used. 00:11:41.092 [2024-12-07 01:55:46.473894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87893 ] 00:11:41.352 [2024-12-07 01:55:46.618399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.352 [2024-12-07 01:55:46.662559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.352 [2024-12-07 01:55:46.704019] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.352 [2024-12-07 01:55:46.704055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.919 BaseBdev1_malloc 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.919 [2024-12-07 01:55:47.301488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:41.919 [2024-12-07 01:55:47.301538] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.919 [2024-12-07 01:55:47.301577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:41.919 [2024-12-07 01:55:47.301590] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.919 [2024-12-07 01:55:47.303668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.919 [2024-12-07 01:55:47.303714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.919 BaseBdev1 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.919 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.920 BaseBdev2_malloc 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.920 [2024-12-07 01:55:47.346563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:41.920 [2024-12-07 01:55:47.346684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:11:41.920 [2024-12-07 01:55:47.346729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:41.920 [2024-12-07 01:55:47.346751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.920 [2024-12-07 01:55:47.351408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.920 [2024-12-07 01:55:47.351478] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.920 BaseBdev2 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.920 BaseBdev3_malloc 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.920 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.920 [2024-12-07 01:55:47.377257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:41.920 [2024-12-07 01:55:47.377328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.920 [2024-12-07 01:55:47.377372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:41.920 [2024-12-07 01:55:47.377381] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.179 [2024-12-07 01:55:47.379767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.179 [2024-12-07 01:55:47.379807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.179 BaseBdev3 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 BaseBdev4_malloc 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 [2024-12-07 01:55:47.405791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:42.179 [2024-12-07 01:55:47.405838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.179 [2024-12-07 01:55:47.405859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:42.179 [2024-12-07 01:55:47.405867] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.179 [2024-12-07 01:55:47.408076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.179 [2024-12-07 01:55:47.408109] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:42.179 BaseBdev4 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 spare_malloc 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 spare_delay 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 [2024-12-07 01:55:47.446066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:42.179 [2024-12-07 01:55:47.446108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.179 [2024-12-07 01:55:47.446143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:42.179 [2024-12-07 01:55:47.446151] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.179 [2024-12-07 
01:55:47.448265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.179 [2024-12-07 01:55:47.448299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:42.179 spare 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.179 [2024-12-07 01:55:47.458108] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.179 [2024-12-07 01:55:47.459976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.179 [2024-12-07 01:55:47.460038] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.179 [2024-12-07 01:55:47.460084] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.179 [2024-12-07 01:55:47.460157] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:42.179 [2024-12-07 01:55:47.460165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:42.179 [2024-12-07 01:55:47.460417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:42.179 [2024-12-07 01:55:47.460538] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:42.179 [2024-12-07 01:55:47.460549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:42.179 [2024-12-07 01:55:47.460663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.179 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.180 "name": "raid_bdev1", 00:11:42.180 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:42.180 "strip_size_kb": 0, 00:11:42.180 "state": "online", 00:11:42.180 "raid_level": 
"raid1", 00:11:42.180 "superblock": false, 00:11:42.180 "num_base_bdevs": 4, 00:11:42.180 "num_base_bdevs_discovered": 4, 00:11:42.180 "num_base_bdevs_operational": 4, 00:11:42.180 "base_bdevs_list": [ 00:11:42.180 { 00:11:42.180 "name": "BaseBdev1", 00:11:42.180 "uuid": "e105d2ec-e481-5198-8878-dbbbab6a1f88", 00:11:42.180 "is_configured": true, 00:11:42.180 "data_offset": 0, 00:11:42.180 "data_size": 65536 00:11:42.180 }, 00:11:42.180 { 00:11:42.180 "name": "BaseBdev2", 00:11:42.180 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:42.180 "is_configured": true, 00:11:42.180 "data_offset": 0, 00:11:42.180 "data_size": 65536 00:11:42.180 }, 00:11:42.180 { 00:11:42.180 "name": "BaseBdev3", 00:11:42.180 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:42.180 "is_configured": true, 00:11:42.180 "data_offset": 0, 00:11:42.180 "data_size": 65536 00:11:42.180 }, 00:11:42.180 { 00:11:42.180 "name": "BaseBdev4", 00:11:42.180 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:42.180 "is_configured": true, 00:11:42.180 "data_offset": 0, 00:11:42.180 "data_size": 65536 00:11:42.180 } 00:11:42.180 ] 00:11:42.180 }' 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.180 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.748 [2024-12-07 01:55:47.937590] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.748 01:55:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.748 01:55:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.748 01:55:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:43.007 [2024-12-07 01:55:48.212883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:43.007 /dev/nbd0 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.007 1+0 records in 00:11:43.007 1+0 records out 00:11:43.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374105 s, 10.9 MB/s 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:43.007 01:55:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:49.617 65536+0 records in 00:11:49.617 65536+0 records out 00:11:49.617 33554432 bytes (34 MB, 32 MiB) copied, 5.63203 s, 6.0 MB/s 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.617 01:55:53 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:49.617 [2024-12-07 01:55:54.107444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.617 
01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.617 [2024-12-07 01:55:54.143450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.617 01:55:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.617 "name": "raid_bdev1", 00:11:49.617 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:49.617 "strip_size_kb": 0, 00:11:49.617 "state": "online", 00:11:49.617 "raid_level": "raid1", 00:11:49.617 "superblock": false, 00:11:49.617 "num_base_bdevs": 4, 00:11:49.617 "num_base_bdevs_discovered": 3, 00:11:49.617 "num_base_bdevs_operational": 3, 00:11:49.617 "base_bdevs_list": [ 00:11:49.617 { 00:11:49.617 "name": null, 00:11:49.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.617 "is_configured": false, 00:11:49.617 "data_offset": 0, 00:11:49.617 "data_size": 65536 00:11:49.617 }, 00:11:49.617 { 00:11:49.617 "name": "BaseBdev2", 00:11:49.617 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:49.617 "is_configured": true, 00:11:49.617 "data_offset": 0, 00:11:49.617 "data_size": 65536 00:11:49.617 }, 00:11:49.617 { 00:11:49.617 "name": "BaseBdev3", 00:11:49.617 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:49.617 "is_configured": true, 00:11:49.617 "data_offset": 0, 00:11:49.617 "data_size": 65536 00:11:49.617 }, 00:11:49.617 { 00:11:49.617 "name": "BaseBdev4", 00:11:49.617 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:49.617 
"is_configured": true, 00:11:49.617 "data_offset": 0, 00:11:49.617 "data_size": 65536 00:11:49.617 } 00:11:49.617 ] 00:11:49.617 }' 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.617 [2024-12-07 01:55:54.606729] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.617 [2024-12-07 01:55:54.610075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:11:49.617 [2024-12-07 01:55:54.612037] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.617 01:55:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.181 
01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.181 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.438 "name": "raid_bdev1", 00:11:50.438 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:50.438 "strip_size_kb": 0, 00:11:50.438 "state": "online", 00:11:50.438 "raid_level": "raid1", 00:11:50.438 "superblock": false, 00:11:50.438 "num_base_bdevs": 4, 00:11:50.438 "num_base_bdevs_discovered": 4, 00:11:50.438 "num_base_bdevs_operational": 4, 00:11:50.438 "process": { 00:11:50.438 "type": "rebuild", 00:11:50.438 "target": "spare", 00:11:50.438 "progress": { 00:11:50.438 "blocks": 20480, 00:11:50.438 "percent": 31 00:11:50.438 } 00:11:50.438 }, 00:11:50.438 "base_bdevs_list": [ 00:11:50.438 { 00:11:50.438 "name": "spare", 00:11:50.438 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:50.438 "is_configured": true, 00:11:50.438 "data_offset": 0, 00:11:50.438 "data_size": 65536 00:11:50.438 }, 00:11:50.438 { 00:11:50.438 "name": "BaseBdev2", 00:11:50.438 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:50.438 "is_configured": true, 00:11:50.438 "data_offset": 0, 00:11:50.438 "data_size": 65536 00:11:50.438 }, 00:11:50.438 { 00:11:50.438 "name": "BaseBdev3", 00:11:50.438 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:50.438 "is_configured": true, 00:11:50.438 "data_offset": 0, 00:11:50.438 "data_size": 65536 00:11:50.438 }, 00:11:50.438 { 00:11:50.438 "name": "BaseBdev4", 00:11:50.438 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:50.438 "is_configured": true, 00:11:50.438 "data_offset": 0, 00:11:50.438 "data_size": 65536 00:11:50.438 } 00:11:50.438 ] 00:11:50.438 }' 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.438 [2024-12-07 01:55:55.746986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.438 [2024-12-07 01:55:55.816773] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:50.438 [2024-12-07 01:55:55.816827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.438 [2024-12-07 01:55:55.816844] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:50.438 [2024-12-07 01:55:55.816851] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.438 01:55:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.438 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.439 "name": "raid_bdev1", 00:11:50.439 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:50.439 "strip_size_kb": 0, 00:11:50.439 "state": "online", 00:11:50.439 "raid_level": "raid1", 00:11:50.439 "superblock": false, 00:11:50.439 "num_base_bdevs": 4, 00:11:50.439 "num_base_bdevs_discovered": 3, 00:11:50.439 "num_base_bdevs_operational": 3, 00:11:50.439 "base_bdevs_list": [ 00:11:50.439 { 00:11:50.439 "name": null, 00:11:50.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.439 "is_configured": false, 00:11:50.439 "data_offset": 0, 00:11:50.439 "data_size": 65536 00:11:50.439 }, 00:11:50.439 { 00:11:50.439 "name": "BaseBdev2", 00:11:50.439 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:50.439 "is_configured": true, 00:11:50.439 "data_offset": 0, 00:11:50.439 "data_size": 65536 00:11:50.439 }, 00:11:50.439 { 00:11:50.439 "name": 
"BaseBdev3", 00:11:50.439 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:50.439 "is_configured": true, 00:11:50.439 "data_offset": 0, 00:11:50.439 "data_size": 65536 00:11:50.439 }, 00:11:50.439 { 00:11:50.439 "name": "BaseBdev4", 00:11:50.439 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:50.439 "is_configured": true, 00:11:50.439 "data_offset": 0, 00:11:50.439 "data_size": 65536 00:11:50.439 } 00:11:50.439 ] 00:11:50.439 }' 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.439 01:55:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.004 "name": "raid_bdev1", 00:11:51.004 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:51.004 "strip_size_kb": 0, 00:11:51.004 "state": "online", 00:11:51.004 "raid_level": 
"raid1", 00:11:51.004 "superblock": false, 00:11:51.004 "num_base_bdevs": 4, 00:11:51.004 "num_base_bdevs_discovered": 3, 00:11:51.004 "num_base_bdevs_operational": 3, 00:11:51.004 "base_bdevs_list": [ 00:11:51.004 { 00:11:51.004 "name": null, 00:11:51.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.004 "is_configured": false, 00:11:51.004 "data_offset": 0, 00:11:51.004 "data_size": 65536 00:11:51.004 }, 00:11:51.004 { 00:11:51.004 "name": "BaseBdev2", 00:11:51.004 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:51.004 "is_configured": true, 00:11:51.004 "data_offset": 0, 00:11:51.004 "data_size": 65536 00:11:51.004 }, 00:11:51.004 { 00:11:51.004 "name": "BaseBdev3", 00:11:51.004 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:51.004 "is_configured": true, 00:11:51.004 "data_offset": 0, 00:11:51.004 "data_size": 65536 00:11:51.004 }, 00:11:51.004 { 00:11:51.004 "name": "BaseBdev4", 00:11:51.004 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:51.004 "is_configured": true, 00:11:51.004 "data_offset": 0, 00:11:51.004 "data_size": 65536 00:11:51.004 } 00:11:51.004 ] 00:11:51.004 }' 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.004 [2024-12-07 01:55:56.379998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:11:51.004 [2024-12-07 01:55:56.383255] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:11:51.004 [2024-12-07 01:55:56.385167] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.004 01:55:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:51.937 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.937 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.937 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.937 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.937 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.195 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.195 "name": "raid_bdev1", 00:11:52.195 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:52.195 "strip_size_kb": 0, 00:11:52.195 "state": "online", 00:11:52.195 "raid_level": "raid1", 00:11:52.195 "superblock": false, 00:11:52.196 "num_base_bdevs": 4, 00:11:52.196 "num_base_bdevs_discovered": 4, 00:11:52.196 "num_base_bdevs_operational": 4, 
00:11:52.196 "process": { 00:11:52.196 "type": "rebuild", 00:11:52.196 "target": "spare", 00:11:52.196 "progress": { 00:11:52.196 "blocks": 20480, 00:11:52.196 "percent": 31 00:11:52.196 } 00:11:52.196 }, 00:11:52.196 "base_bdevs_list": [ 00:11:52.196 { 00:11:52.196 "name": "spare", 00:11:52.196 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev2", 00:11:52.196 "uuid": "649e0f97-81dc-5115-92ac-c3b3337572a6", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev3", 00:11:52.196 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev4", 00:11:52.196 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 } 00:11:52.196 ] 00:11:52.196 }' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.196 [2024-12-07 01:55:57.536065] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.196 [2024-12-07 01:55:57.589305] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.196 "name": "raid_bdev1", 00:11:52.196 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:52.196 "strip_size_kb": 0, 00:11:52.196 "state": "online", 00:11:52.196 "raid_level": "raid1", 00:11:52.196 "superblock": false, 00:11:52.196 "num_base_bdevs": 4, 00:11:52.196 "num_base_bdevs_discovered": 3, 00:11:52.196 "num_base_bdevs_operational": 3, 00:11:52.196 "process": { 00:11:52.196 "type": "rebuild", 00:11:52.196 "target": "spare", 00:11:52.196 "progress": { 00:11:52.196 "blocks": 24576, 00:11:52.196 "percent": 37 00:11:52.196 } 00:11:52.196 }, 00:11:52.196 "base_bdevs_list": [ 00:11:52.196 { 00:11:52.196 "name": "spare", 00:11:52.196 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": null, 00:11:52.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.196 "is_configured": false, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev3", 00:11:52.196 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 }, 00:11:52.196 { 00:11:52.196 "name": "BaseBdev4", 00:11:52.196 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:52.196 "is_configured": true, 00:11:52.196 "data_offset": 0, 00:11:52.196 "data_size": 65536 00:11:52.196 } 00:11:52.196 ] 00:11:52.196 }' 00:11:52.196 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.457 01:55:57 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=355 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.457 "name": "raid_bdev1", 00:11:52.457 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:52.457 "strip_size_kb": 0, 00:11:52.457 "state": "online", 00:11:52.457 "raid_level": "raid1", 00:11:52.457 "superblock": false, 00:11:52.457 "num_base_bdevs": 4, 00:11:52.457 "num_base_bdevs_discovered": 3, 00:11:52.457 "num_base_bdevs_operational": 3, 00:11:52.457 "process": { 00:11:52.457 "type": "rebuild", 00:11:52.457 "target": "spare", 00:11:52.457 "progress": { 00:11:52.457 "blocks": 26624, 00:11:52.457 "percent": 40 
00:11:52.457 } 00:11:52.457 }, 00:11:52.457 "base_bdevs_list": [ 00:11:52.457 { 00:11:52.457 "name": "spare", 00:11:52.457 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:52.457 "is_configured": true, 00:11:52.457 "data_offset": 0, 00:11:52.457 "data_size": 65536 00:11:52.457 }, 00:11:52.457 { 00:11:52.457 "name": null, 00:11:52.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.457 "is_configured": false, 00:11:52.457 "data_offset": 0, 00:11:52.457 "data_size": 65536 00:11:52.457 }, 00:11:52.457 { 00:11:52.457 "name": "BaseBdev3", 00:11:52.457 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:52.457 "is_configured": true, 00:11:52.457 "data_offset": 0, 00:11:52.457 "data_size": 65536 00:11:52.457 }, 00:11:52.457 { 00:11:52.457 "name": "BaseBdev4", 00:11:52.457 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:52.457 "is_configured": true, 00:11:52.457 "data_offset": 0, 00:11:52.457 "data_size": 65536 00:11:52.457 } 00:11:52.457 ] 00:11:52.457 }' 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:52.457 01:55:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.393 01:55:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.393 01:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.653 "name": "raid_bdev1", 00:11:53.653 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:53.653 "strip_size_kb": 0, 00:11:53.653 "state": "online", 00:11:53.653 "raid_level": "raid1", 00:11:53.653 "superblock": false, 00:11:53.653 "num_base_bdevs": 4, 00:11:53.653 "num_base_bdevs_discovered": 3, 00:11:53.653 "num_base_bdevs_operational": 3, 00:11:53.653 "process": { 00:11:53.653 "type": "rebuild", 00:11:53.653 "target": "spare", 00:11:53.653 "progress": { 00:11:53.653 "blocks": 49152, 00:11:53.653 "percent": 75 00:11:53.653 } 00:11:53.653 }, 00:11:53.653 "base_bdevs_list": [ 00:11:53.653 { 00:11:53.653 "name": "spare", 00:11:53.653 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:53.653 "is_configured": true, 00:11:53.653 "data_offset": 0, 00:11:53.653 "data_size": 65536 00:11:53.653 }, 00:11:53.653 { 00:11:53.653 "name": null, 00:11:53.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.653 "is_configured": false, 00:11:53.653 "data_offset": 0, 00:11:53.653 "data_size": 65536 00:11:53.653 }, 00:11:53.653 { 00:11:53.653 "name": "BaseBdev3", 00:11:53.653 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:53.653 "is_configured": true, 
00:11:53.653 "data_offset": 0, 00:11:53.653 "data_size": 65536 00:11:53.653 }, 00:11:53.653 { 00:11:53.653 "name": "BaseBdev4", 00:11:53.653 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:53.653 "is_configured": true, 00:11:53.653 "data_offset": 0, 00:11:53.653 "data_size": 65536 00:11:53.653 } 00:11:53.653 ] 00:11:53.653 }' 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.653 01:55:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:54.222 [2024-12-07 01:55:59.596598] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:54.222 [2024-12-07 01:55:59.596702] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:54.222 [2024-12-07 01:55:59.596761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.791 01:55:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.791 "name": "raid_bdev1", 00:11:54.791 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:54.791 "strip_size_kb": 0, 00:11:54.791 "state": "online", 00:11:54.791 "raid_level": "raid1", 00:11:54.791 "superblock": false, 00:11:54.791 "num_base_bdevs": 4, 00:11:54.791 "num_base_bdevs_discovered": 3, 00:11:54.791 "num_base_bdevs_operational": 3, 00:11:54.791 "base_bdevs_list": [ 00:11:54.791 { 00:11:54.791 "name": "spare", 00:11:54.791 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:54.791 "is_configured": true, 00:11:54.791 "data_offset": 0, 00:11:54.791 "data_size": 65536 00:11:54.791 }, 00:11:54.791 { 00:11:54.791 "name": null, 00:11:54.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.791 "is_configured": false, 00:11:54.791 "data_offset": 0, 00:11:54.791 "data_size": 65536 00:11:54.791 }, 00:11:54.791 { 00:11:54.791 "name": "BaseBdev3", 00:11:54.791 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:54.791 "is_configured": true, 00:11:54.791 "data_offset": 0, 00:11:54.791 "data_size": 65536 00:11:54.791 }, 00:11:54.791 { 00:11:54.791 "name": "BaseBdev4", 00:11:54.791 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:54.791 "is_configured": true, 00:11:54.791 "data_offset": 0, 00:11:54.791 "data_size": 65536 00:11:54.791 } 00:11:54.791 ] 00:11:54.791 }' 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.791 01:56:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.791 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.791 "name": "raid_bdev1", 00:11:54.791 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:54.791 "strip_size_kb": 0, 00:11:54.791 "state": "online", 00:11:54.791 "raid_level": "raid1", 00:11:54.791 "superblock": false, 00:11:54.792 "num_base_bdevs": 4, 00:11:54.792 "num_base_bdevs_discovered": 3, 00:11:54.792 "num_base_bdevs_operational": 3, 00:11:54.792 "base_bdevs_list": [ 00:11:54.792 { 00:11:54.792 "name": "spare", 
00:11:54.792 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:54.792 "is_configured": true, 00:11:54.792 "data_offset": 0, 00:11:54.792 "data_size": 65536 00:11:54.792 }, 00:11:54.792 { 00:11:54.792 "name": null, 00:11:54.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.792 "is_configured": false, 00:11:54.792 "data_offset": 0, 00:11:54.792 "data_size": 65536 00:11:54.792 }, 00:11:54.792 { 00:11:54.792 "name": "BaseBdev3", 00:11:54.792 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:54.792 "is_configured": true, 00:11:54.792 "data_offset": 0, 00:11:54.792 "data_size": 65536 00:11:54.792 }, 00:11:54.792 { 00:11:54.792 "name": "BaseBdev4", 00:11:54.792 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:54.792 "is_configured": true, 00:11:54.792 "data_offset": 0, 00:11:54.792 "data_size": 65536 00:11:54.792 } 00:11:54.792 ] 00:11:54.792 }' 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.792 01:56:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.792 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.051 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.051 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.051 "name": "raid_bdev1", 00:11:55.051 "uuid": "9f1231d4-8f81-4bac-9651-d130d3ccc863", 00:11:55.051 "strip_size_kb": 0, 00:11:55.051 "state": "online", 00:11:55.051 "raid_level": "raid1", 00:11:55.051 "superblock": false, 00:11:55.051 "num_base_bdevs": 4, 00:11:55.051 "num_base_bdevs_discovered": 3, 00:11:55.051 "num_base_bdevs_operational": 3, 00:11:55.051 "base_bdevs_list": [ 00:11:55.051 { 00:11:55.051 "name": "spare", 00:11:55.051 "uuid": "a87dd271-9f9b-51c6-8ef5-534ec6749aab", 00:11:55.051 "is_configured": true, 00:11:55.051 "data_offset": 0, 00:11:55.051 "data_size": 65536 00:11:55.051 }, 00:11:55.051 { 00:11:55.051 "name": null, 00:11:55.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.051 "is_configured": false, 00:11:55.051 "data_offset": 0, 00:11:55.051 "data_size": 65536 00:11:55.051 }, 00:11:55.051 { 00:11:55.051 "name": "BaseBdev3", 00:11:55.051 "uuid": "78a6447d-09b4-53c5-b5ee-b57a8e9bfc45", 00:11:55.051 "is_configured": true, 
00:11:55.051 "data_offset": 0, 00:11:55.051 "data_size": 65536 00:11:55.051 }, 00:11:55.051 { 00:11:55.051 "name": "BaseBdev4", 00:11:55.051 "uuid": "9bf4594a-e6c6-58ed-ac21-5002f4636126", 00:11:55.051 "is_configured": true, 00:11:55.051 "data_offset": 0, 00:11:55.051 "data_size": 65536 00:11:55.051 } 00:11:55.051 ] 00:11:55.051 }' 00:11:55.051 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.051 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.310 [2024-12-07 01:56:00.635142] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.310 [2024-12-07 01:56:00.635172] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.310 [2024-12-07 01:56:00.635252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.310 [2024-12-07 01:56:00.635327] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.310 [2024-12-07 01:56:00.635339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.310 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:55.569 /dev/nbd0 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:55.569 01:56:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.569 1+0 records in 00:11:55.569 1+0 records out 00:11:55.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376144 s, 10.9 MB/s 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:55.569 01:56:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:55.828 /dev/nbd1 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:55.828 
01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.828 1+0 records in 00:11:55.828 1+0 records out 00:11:55.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310084 s, 13.2 MB/s 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:55.828 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:56.086 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:56.086 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:56.086 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:56.086 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.086 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.087 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:56.087 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:56.087 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.087 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:56.087 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:56.346 
01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87893 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 87893 ']' 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 87893 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87893 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:56.346 killing process with pid 87893 00:11:56.346 Received shutdown signal, test time was about 60.000000 seconds 00:11:56.346 00:11:56.346 Latency(us) 00:11:56.346 [2024-12-07T01:56:01.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.346 [2024-12-07T01:56:01.808Z] 
=================================================================================================================== 00:11:56.346 [2024-12-07T01:56:01.808Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87893' 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 87893 00:11:56.346 [2024-12-07 01:56:01.724997] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.346 01:56:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 87893 00:11:56.346 [2024-12-07 01:56:01.775054] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.607 01:56:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:56.607 00:11:56.607 real 0m15.628s 00:11:56.607 user 0m17.406s 00:11:56.607 sys 0m3.174s 00:11:56.607 ************************************ 00:11:56.607 END TEST raid_rebuild_test 00:11:56.607 ************************************ 00:11:56.607 01:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.607 01:56:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 01:56:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:56.867 01:56:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:11:56.867 01:56:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.867 01:56:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 ************************************ 00:11:56.867 START TEST raid_rebuild_test_sb 00:11:56.867 ************************************ 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88318 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88318 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 88318 ']' 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:56.867 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 [2024-12-07 01:56:02.183196] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:56.867 [2024-12-07 01:56:02.183363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.867 Zero copy mechanism will not be used. 00:11:56.867 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88318 ] 00:11:56.867 [2024-12-07 01:56:02.308062] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.126 [2024-12-07 01:56:02.352172] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.126 [2024-12-07 01:56:02.393169] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.126 [2024-12-07 01:56:02.393281] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:57.694 01:56:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 BaseBdev1_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 [2024-12-07 01:56:03.022554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:57.694 [2024-12-07 01:56:03.022607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.694 [2024-12-07 01:56:03.022628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:57.694 [2024-12-07 01:56:03.022649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.694 [2024-12-07 01:56:03.024716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.694 [2024-12-07 01:56:03.024803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.694 BaseBdev1 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 BaseBdev2_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 [2024-12-07 01:56:03.059348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:57.694 [2024-12-07 01:56:03.059445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.694 [2024-12-07 01:56:03.059473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:57.694 [2024-12-07 01:56:03.059483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.694 [2024-12-07 01:56:03.061525] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.694 [2024-12-07 01:56:03.061562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.694 BaseBdev2 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 BaseBdev3_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 [2024-12-07 01:56:03.087572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:57.694 [2024-12-07 01:56:03.087626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.694 [2024-12-07 01:56:03.087652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:57.694 [2024-12-07 01:56:03.087674] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.694 [2024-12-07 01:56:03.089693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.694 [2024-12-07 01:56:03.089723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.694 BaseBdev3 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.694 BaseBdev4_malloc 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.694 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:57.694 [2024-12-07 01:56:03.115753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:57.694 [2024-12-07 01:56:03.115794] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.694 [2024-12-07 01:56:03.115813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:57.695 [2024-12-07 01:56:03.115821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.695 [2024-12-07 01:56:03.117871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.695 [2024-12-07 01:56:03.117905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:57.695 BaseBdev4 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.695 spare_malloc 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.695 spare_delay 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.695 01:56:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.695 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.953 [2024-12-07 01:56:03.155865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.953 [2024-12-07 01:56:03.155947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.953 [2024-12-07 01:56:03.155969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:57.953 [2024-12-07 01:56:03.155978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.953 [2024-12-07 01:56:03.158135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.953 [2024-12-07 01:56:03.158171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.953 spare 00:11:57.953 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.954 [2024-12-07 01:56:03.167921] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.954 [2024-12-07 01:56:03.169724] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.954 [2024-12-07 01:56:03.169784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.954 [2024-12-07 01:56:03.169831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.954 [2024-12-07 01:56:03.169986] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:57.954 [2024-12-07 01:56:03.169996] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.954 [2024-12-07 01:56:03.170245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:57.954 [2024-12-07 01:56:03.170387] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:57.954 [2024-12-07 01:56:03.170400] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:57.954 [2024-12-07 01:56:03.170507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.954 "name": "raid_bdev1", 00:11:57.954 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:11:57.954 "strip_size_kb": 0, 00:11:57.954 "state": "online", 00:11:57.954 "raid_level": "raid1", 00:11:57.954 "superblock": true, 00:11:57.954 "num_base_bdevs": 4, 00:11:57.954 "num_base_bdevs_discovered": 4, 00:11:57.954 "num_base_bdevs_operational": 4, 00:11:57.954 "base_bdevs_list": [ 00:11:57.954 { 00:11:57.954 "name": "BaseBdev1", 00:11:57.954 "uuid": "a7992e8b-2205-52dd-bb80-dbda1e0fe370", 00:11:57.954 "is_configured": true, 00:11:57.954 "data_offset": 2048, 00:11:57.954 "data_size": 63488 00:11:57.954 }, 00:11:57.954 { 00:11:57.954 "name": "BaseBdev2", 00:11:57.954 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:11:57.954 "is_configured": true, 00:11:57.954 "data_offset": 2048, 00:11:57.954 "data_size": 63488 00:11:57.954 }, 00:11:57.954 { 00:11:57.954 "name": "BaseBdev3", 00:11:57.954 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:11:57.954 "is_configured": true, 00:11:57.954 "data_offset": 2048, 00:11:57.954 "data_size": 63488 00:11:57.954 }, 00:11:57.954 { 00:11:57.954 "name": "BaseBdev4", 00:11:57.954 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:11:57.954 "is_configured": true, 00:11:57.954 "data_offset": 2048, 00:11:57.954 "data_size": 63488 00:11:57.954 } 00:11:57.954 ] 00:11:57.954 }' 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.954 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.213 [2024-12-07 01:56:03.607505] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.213 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:58.472 [2024-12-07 01:56:03.878779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:58.472 /dev/nbd0 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:11:58.472 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:11:58.731 
01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.731 1+0 records in 00:11:58.731 1+0 records out 00:11:58.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459143 s, 8.9 MB/s 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:58.731 01:56:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:04.035 63488+0 records in 00:12:04.035 63488+0 records out 00:12:04.035 32505856 bytes (33 MB, 31 MiB) copied, 4.92919 s, 6.6 MB/s 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.035 01:56:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:04.035 [2024-12-07 01:56:09.073036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.035 [2024-12-07 01:56:09.105048] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.035 
01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.035 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.036 "name": "raid_bdev1", 00:12:04.036 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:04.036 "strip_size_kb": 0, 00:12:04.036 "state": 
"online", 00:12:04.036 "raid_level": "raid1", 00:12:04.036 "superblock": true, 00:12:04.036 "num_base_bdevs": 4, 00:12:04.036 "num_base_bdevs_discovered": 3, 00:12:04.036 "num_base_bdevs_operational": 3, 00:12:04.036 "base_bdevs_list": [ 00:12:04.036 { 00:12:04.036 "name": null, 00:12:04.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.036 "is_configured": false, 00:12:04.036 "data_offset": 0, 00:12:04.036 "data_size": 63488 00:12:04.036 }, 00:12:04.036 { 00:12:04.036 "name": "BaseBdev2", 00:12:04.036 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:12:04.036 "is_configured": true, 00:12:04.036 "data_offset": 2048, 00:12:04.036 "data_size": 63488 00:12:04.036 }, 00:12:04.036 { 00:12:04.036 "name": "BaseBdev3", 00:12:04.036 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:04.036 "is_configured": true, 00:12:04.036 "data_offset": 2048, 00:12:04.036 "data_size": 63488 00:12:04.036 }, 00:12:04.036 { 00:12:04.036 "name": "BaseBdev4", 00:12:04.036 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:04.036 "is_configured": true, 00:12:04.036 "data_offset": 2048, 00:12:04.036 "data_size": 63488 00:12:04.036 } 00:12:04.036 ] 00:12:04.036 }' 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.036 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.295 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:04.295 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.295 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.295 [2024-12-07 01:56:09.592257] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:04.295 [2024-12-07 01:56:09.595521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:12:04.295 [2024-12-07 01:56:09.597418] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.295 01:56:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.295 01:56:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.229 "name": "raid_bdev1", 00:12:05.229 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:05.229 "strip_size_kb": 0, 00:12:05.229 "state": "online", 00:12:05.229 "raid_level": "raid1", 00:12:05.229 "superblock": true, 00:12:05.229 "num_base_bdevs": 4, 00:12:05.229 "num_base_bdevs_discovered": 4, 00:12:05.229 "num_base_bdevs_operational": 4, 00:12:05.229 "process": { 00:12:05.229 "type": "rebuild", 00:12:05.229 "target": "spare", 00:12:05.229 "progress": { 00:12:05.229 
"blocks": 20480, 00:12:05.229 "percent": 32 00:12:05.229 } 00:12:05.229 }, 00:12:05.229 "base_bdevs_list": [ 00:12:05.229 { 00:12:05.229 "name": "spare", 00:12:05.229 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:05.229 "is_configured": true, 00:12:05.229 "data_offset": 2048, 00:12:05.229 "data_size": 63488 00:12:05.229 }, 00:12:05.229 { 00:12:05.229 "name": "BaseBdev2", 00:12:05.229 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:12:05.229 "is_configured": true, 00:12:05.229 "data_offset": 2048, 00:12:05.229 "data_size": 63488 00:12:05.229 }, 00:12:05.229 { 00:12:05.229 "name": "BaseBdev3", 00:12:05.229 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:05.229 "is_configured": true, 00:12:05.229 "data_offset": 2048, 00:12:05.229 "data_size": 63488 00:12:05.229 }, 00:12:05.229 { 00:12:05.229 "name": "BaseBdev4", 00:12:05.229 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:05.229 "is_configured": true, 00:12:05.229 "data_offset": 2048, 00:12:05.229 "data_size": 63488 00:12:05.229 } 00:12:05.229 ] 00:12:05.229 }' 00:12:05.229 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.488 [2024-12-07 01:56:10.756203] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.488 [2024-12-07 01:56:10.802081] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:05.488 [2024-12-07 01:56:10.802140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.488 [2024-12-07 01:56:10.802159] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:05.488 [2024-12-07 01:56:10.802167] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.488 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.488 "name": "raid_bdev1", 00:12:05.488 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:05.488 "strip_size_kb": 0, 00:12:05.488 "state": "online", 00:12:05.488 "raid_level": "raid1", 00:12:05.488 "superblock": true, 00:12:05.488 "num_base_bdevs": 4, 00:12:05.488 "num_base_bdevs_discovered": 3, 00:12:05.488 "num_base_bdevs_operational": 3, 00:12:05.488 "base_bdevs_list": [ 00:12:05.488 { 00:12:05.488 "name": null, 00:12:05.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.488 "is_configured": false, 00:12:05.488 "data_offset": 0, 00:12:05.488 "data_size": 63488 00:12:05.488 }, 00:12:05.488 { 00:12:05.488 "name": "BaseBdev2", 00:12:05.488 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:12:05.488 "is_configured": true, 00:12:05.488 "data_offset": 2048, 00:12:05.488 "data_size": 63488 00:12:05.488 }, 00:12:05.488 { 00:12:05.488 "name": "BaseBdev3", 00:12:05.488 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:05.488 "is_configured": true, 00:12:05.488 "data_offset": 2048, 00:12:05.488 "data_size": 63488 00:12:05.489 }, 00:12:05.489 { 00:12:05.489 "name": "BaseBdev4", 00:12:05.489 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:05.489 "is_configured": true, 00:12:05.489 "data_offset": 2048, 00:12:05.489 "data_size": 63488 00:12:05.489 } 00:12:05.489 ] 00:12:05.489 }' 00:12:05.489 01:56:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.489 01:56:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.056 "name": "raid_bdev1", 00:12:06.056 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:06.056 "strip_size_kb": 0, 00:12:06.056 "state": "online", 00:12:06.056 "raid_level": "raid1", 00:12:06.056 "superblock": true, 00:12:06.056 "num_base_bdevs": 4, 00:12:06.056 "num_base_bdevs_discovered": 3, 00:12:06.056 "num_base_bdevs_operational": 3, 00:12:06.056 "base_bdevs_list": [ 00:12:06.056 { 00:12:06.056 "name": null, 00:12:06.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:06.056 "is_configured": false, 00:12:06.056 "data_offset": 0, 00:12:06.056 "data_size": 63488 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev2", 00:12:06.056 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:12:06.056 "is_configured": true, 00:12:06.056 "data_offset": 2048, 00:12:06.056 "data_size": 63488 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev3", 00:12:06.056 "uuid": 
"4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:06.056 "is_configured": true, 00:12:06.056 "data_offset": 2048, 00:12:06.056 "data_size": 63488 00:12:06.056 }, 00:12:06.056 { 00:12:06.056 "name": "BaseBdev4", 00:12:06.056 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:06.056 "is_configured": true, 00:12:06.056 "data_offset": 2048, 00:12:06.056 "data_size": 63488 00:12:06.056 } 00:12:06.056 ] 00:12:06.056 }' 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.056 [2024-12-07 01:56:11.401283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:06.056 [2024-12-07 01:56:11.404592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:12:06.056 [2024-12-07 01:56:11.406516] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.056 01:56:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.992 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.251 "name": "raid_bdev1", 00:12:07.251 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:07.251 "strip_size_kb": 0, 00:12:07.251 "state": "online", 00:12:07.251 "raid_level": "raid1", 00:12:07.251 "superblock": true, 00:12:07.251 "num_base_bdevs": 4, 00:12:07.251 "num_base_bdevs_discovered": 4, 00:12:07.251 "num_base_bdevs_operational": 4, 00:12:07.251 "process": { 00:12:07.251 "type": "rebuild", 00:12:07.251 "target": "spare", 00:12:07.251 "progress": { 00:12:07.251 "blocks": 20480, 00:12:07.251 "percent": 32 00:12:07.251 } 00:12:07.251 }, 00:12:07.251 "base_bdevs_list": [ 00:12:07.251 { 00:12:07.251 "name": "spare", 00:12:07.251 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:07.251 "is_configured": true, 00:12:07.251 "data_offset": 2048, 00:12:07.251 "data_size": 63488 00:12:07.251 }, 00:12:07.251 { 00:12:07.251 "name": "BaseBdev2", 00:12:07.251 "uuid": "4535e2e4-f0e1-5f36-bca8-23601264b300", 00:12:07.251 "is_configured": true, 00:12:07.251 "data_offset": 2048, 
00:12:07.251 "data_size": 63488 00:12:07.251 }, 00:12:07.251 { 00:12:07.251 "name": "BaseBdev3", 00:12:07.251 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:07.251 "is_configured": true, 00:12:07.251 "data_offset": 2048, 00:12:07.251 "data_size": 63488 00:12:07.251 }, 00:12:07.251 { 00:12:07.251 "name": "BaseBdev4", 00:12:07.251 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:07.251 "is_configured": true, 00:12:07.251 "data_offset": 2048, 00:12:07.251 "data_size": 63488 00:12:07.251 } 00:12:07.251 ] 00:12:07.251 }' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:07.251 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.251 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.251 [2024-12-07 01:56:12.549242] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:07.251 [2024-12-07 01:56:12.710328] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:12:07.509 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.509 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.510 "name": "raid_bdev1", 00:12:07.510 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:07.510 "strip_size_kb": 0, 00:12:07.510 "state": "online", 00:12:07.510 "raid_level": "raid1", 00:12:07.510 "superblock": true, 00:12:07.510 "num_base_bdevs": 4, 
00:12:07.510 "num_base_bdevs_discovered": 3, 00:12:07.510 "num_base_bdevs_operational": 3, 00:12:07.510 "process": { 00:12:07.510 "type": "rebuild", 00:12:07.510 "target": "spare", 00:12:07.510 "progress": { 00:12:07.510 "blocks": 24576, 00:12:07.510 "percent": 38 00:12:07.510 } 00:12:07.510 }, 00:12:07.510 "base_bdevs_list": [ 00:12:07.510 { 00:12:07.510 "name": "spare", 00:12:07.510 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": null, 00:12:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.510 "is_configured": false, 00:12:07.510 "data_offset": 0, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": "BaseBdev3", 00:12:07.510 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": "BaseBdev4", 00:12:07.510 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 } 00:12:07.510 ] 00:12:07.510 }' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.510 "name": "raid_bdev1", 00:12:07.510 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:07.510 "strip_size_kb": 0, 00:12:07.510 "state": "online", 00:12:07.510 "raid_level": "raid1", 00:12:07.510 "superblock": true, 00:12:07.510 "num_base_bdevs": 4, 00:12:07.510 "num_base_bdevs_discovered": 3, 00:12:07.510 "num_base_bdevs_operational": 3, 00:12:07.510 "process": { 00:12:07.510 "type": "rebuild", 00:12:07.510 "target": "spare", 00:12:07.510 "progress": { 00:12:07.510 "blocks": 26624, 00:12:07.510 "percent": 41 00:12:07.510 } 00:12:07.510 }, 00:12:07.510 "base_bdevs_list": [ 00:12:07.510 { 00:12:07.510 "name": "spare", 00:12:07.510 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 
00:12:07.510 "name": null, 00:12:07.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.510 "is_configured": false, 00:12:07.510 "data_offset": 0, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": "BaseBdev3", 00:12:07.510 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 }, 00:12:07.510 { 00:12:07.510 "name": "BaseBdev4", 00:12:07.510 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:07.510 "is_configured": true, 00:12:07.510 "data_offset": 2048, 00:12:07.510 "data_size": 63488 00:12:07.510 } 00:12:07.510 ] 00:12:07.510 }' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:07.510 01:56:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.769 01:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:07.769 01:56:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:08.705 "name": "raid_bdev1", 00:12:08.705 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:08.705 "strip_size_kb": 0, 00:12:08.705 "state": "online", 00:12:08.705 "raid_level": "raid1", 00:12:08.705 "superblock": true, 00:12:08.705 "num_base_bdevs": 4, 00:12:08.705 "num_base_bdevs_discovered": 3, 00:12:08.705 "num_base_bdevs_operational": 3, 00:12:08.705 "process": { 00:12:08.705 "type": "rebuild", 00:12:08.705 "target": "spare", 00:12:08.705 "progress": { 00:12:08.705 "blocks": 51200, 00:12:08.705 "percent": 80 00:12:08.705 } 00:12:08.705 }, 00:12:08.705 "base_bdevs_list": [ 00:12:08.705 { 00:12:08.705 "name": "spare", 00:12:08.705 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:08.705 "is_configured": true, 00:12:08.705 "data_offset": 2048, 00:12:08.705 "data_size": 63488 00:12:08.705 }, 00:12:08.705 { 00:12:08.705 "name": null, 00:12:08.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.705 "is_configured": false, 00:12:08.705 "data_offset": 0, 00:12:08.705 "data_size": 63488 00:12:08.705 }, 00:12:08.705 { 00:12:08.705 "name": "BaseBdev3", 00:12:08.705 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:08.705 "is_configured": true, 00:12:08.705 "data_offset": 2048, 00:12:08.705 "data_size": 63488 00:12:08.705 }, 00:12:08.705 { 00:12:08.705 "name": "BaseBdev4", 00:12:08.705 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:08.705 "is_configured": true, 00:12:08.705 "data_offset": 
2048, 00:12:08.705 "data_size": 63488 00:12:08.705 } 00:12:08.705 ] 00:12:08.705 }' 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:08.705 01:56:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:09.271 [2024-12-07 01:56:14.616969] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:09.271 [2024-12-07 01:56:14.617041] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:09.271 [2024-12-07 01:56:14.617137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.842 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.842 "name": "raid_bdev1", 00:12:09.842 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:09.842 "strip_size_kb": 0, 00:12:09.842 "state": "online", 00:12:09.842 "raid_level": "raid1", 00:12:09.842 "superblock": true, 00:12:09.842 "num_base_bdevs": 4, 00:12:09.842 "num_base_bdevs_discovered": 3, 00:12:09.842 "num_base_bdevs_operational": 3, 00:12:09.842 "base_bdevs_list": [ 00:12:09.842 { 00:12:09.842 "name": "spare", 00:12:09.842 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:09.842 "is_configured": true, 00:12:09.842 "data_offset": 2048, 00:12:09.842 "data_size": 63488 00:12:09.842 }, 00:12:09.842 { 00:12:09.842 "name": null, 00:12:09.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.842 "is_configured": false, 00:12:09.842 "data_offset": 0, 00:12:09.842 "data_size": 63488 00:12:09.842 }, 00:12:09.842 { 00:12:09.843 "name": "BaseBdev3", 00:12:09.843 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 2048, 00:12:09.843 "data_size": 63488 00:12:09.843 }, 00:12:09.843 { 00:12:09.843 "name": "BaseBdev4", 00:12:09.843 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:09.843 "is_configured": true, 00:12:09.843 "data_offset": 2048, 00:12:09.843 "data_size": 63488 00:12:09.843 } 00:12:09.843 ] 00:12:09.843 }' 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.843 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.099 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:10.099 "name": "raid_bdev1", 00:12:10.099 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:10.099 "strip_size_kb": 0, 00:12:10.099 "state": "online", 00:12:10.099 "raid_level": "raid1", 00:12:10.099 "superblock": true, 00:12:10.099 "num_base_bdevs": 4, 00:12:10.099 "num_base_bdevs_discovered": 3, 00:12:10.099 "num_base_bdevs_operational": 3, 00:12:10.099 "base_bdevs_list": [ 00:12:10.099 { 00:12:10.099 "name": "spare", 00:12:10.099 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:10.099 "is_configured": true, 00:12:10.100 "data_offset": 2048, 
00:12:10.100 "data_size": 63488 00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": null, 00:12:10.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.100 "is_configured": false, 00:12:10.100 "data_offset": 0, 00:12:10.100 "data_size": 63488 00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": "BaseBdev3", 00:12:10.100 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:10.100 "is_configured": true, 00:12:10.100 "data_offset": 2048, 00:12:10.100 "data_size": 63488 00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": "BaseBdev4", 00:12:10.100 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:10.100 "is_configured": true, 00:12:10.100 "data_offset": 2048, 00:12:10.100 "data_size": 63488 00:12:10.100 } 00:12:10.100 ] 00:12:10.100 }' 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.100 
01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.100 "name": "raid_bdev1", 00:12:10.100 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:10.100 "strip_size_kb": 0, 00:12:10.100 "state": "online", 00:12:10.100 "raid_level": "raid1", 00:12:10.100 "superblock": true, 00:12:10.100 "num_base_bdevs": 4, 00:12:10.100 "num_base_bdevs_discovered": 3, 00:12:10.100 "num_base_bdevs_operational": 3, 00:12:10.100 "base_bdevs_list": [ 00:12:10.100 { 00:12:10.100 "name": "spare", 00:12:10.100 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:10.100 "is_configured": true, 00:12:10.100 "data_offset": 2048, 00:12:10.100 "data_size": 63488 00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": null, 00:12:10.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.100 "is_configured": false, 00:12:10.100 "data_offset": 0, 00:12:10.100 "data_size": 63488 00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": "BaseBdev3", 00:12:10.100 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:10.100 "is_configured": true, 00:12:10.100 "data_offset": 2048, 00:12:10.100 "data_size": 63488 
00:12:10.100 }, 00:12:10.100 { 00:12:10.100 "name": "BaseBdev4", 00:12:10.100 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:10.100 "is_configured": true, 00:12:10.100 "data_offset": 2048, 00:12:10.100 "data_size": 63488 00:12:10.100 } 00:12:10.100 ] 00:12:10.100 }' 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.100 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.664 [2024-12-07 01:56:15.866844] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.664 [2024-12-07 01:56:15.866916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.664 [2024-12-07 01:56:15.867027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.664 [2024-12-07 01:56:15.867152] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.664 [2024-12-07 01:56:15.867210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.664 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:10.665 
01:56:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.665 01:56:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:10.923 /dev/nbd0 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:10.923 1+0 records in 00:12:10.923 1+0 records out 00:12:10.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826311 s, 5.0 MB/s 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:10.923 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:11.181 /dev/nbd1 00:12:11.181 01:56:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:11.181 1+0 records in 00:12:11.181 1+0 records out 00:12:11.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290204 s, 14.1 MB/s 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:12:11.181 01:56:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.181 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.439 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.698 [2024-12-07 01:56:16.902779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:11.698 [2024-12-07 01:56:16.902884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.698 [2024-12-07 01:56:16.902929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:11.698 [2024-12-07 01:56:16.902964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.698 [2024-12-07 01:56:16.905132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.698 [2024-12-07 01:56:16.905221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:11.698 [2024-12-07 01:56:16.905327] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:11.698 [2024-12-07 01:56:16.905393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.698 [2024-12-07 01:56:16.905526] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.698 [2024-12-07 01:56:16.905682] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.698 spare 00:12:11.698 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.698 01:56:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:11.698 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.698 01:56:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.698 [2024-12-07 01:56:17.005615] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:11.698 [2024-12-07 01:56:17.005694] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.698 [2024-12-07 01:56:17.005996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:12:11.698 [2024-12-07 01:56:17.006167] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:11.698 [2024-12-07 01:56:17.006209] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:11.698 [2024-12-07 01:56:17.006368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.698 "name": "raid_bdev1", 00:12:11.698 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:11.698 "strip_size_kb": 0, 00:12:11.698 "state": "online", 00:12:11.698 "raid_level": "raid1", 00:12:11.698 "superblock": true, 00:12:11.698 "num_base_bdevs": 4, 00:12:11.698 "num_base_bdevs_discovered": 3, 00:12:11.698 "num_base_bdevs_operational": 3, 00:12:11.698 "base_bdevs_list": [ 00:12:11.698 { 00:12:11.698 "name": "spare", 00:12:11.698 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:11.698 "is_configured": true, 00:12:11.698 "data_offset": 2048, 00:12:11.698 "data_size": 63488 00:12:11.698 }, 00:12:11.698 { 00:12:11.698 "name": null, 00:12:11.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.698 "is_configured": false, 00:12:11.698 "data_offset": 2048, 00:12:11.698 "data_size": 63488 00:12:11.698 }, 00:12:11.698 { 00:12:11.698 "name": "BaseBdev3", 00:12:11.698 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:11.698 "is_configured": true, 00:12:11.698 "data_offset": 2048, 00:12:11.698 "data_size": 63488 00:12:11.698 }, 00:12:11.698 { 00:12:11.698 "name": "BaseBdev4", 00:12:11.698 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:11.698 "is_configured": true, 00:12:11.698 "data_offset": 2048, 00:12:11.698 "data_size": 63488 00:12:11.698 } 00:12:11.698 ] 00:12:11.698 }' 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.698 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.317 
01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.317 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.317 "name": "raid_bdev1", 00:12:12.317 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:12.317 "strip_size_kb": 0, 00:12:12.317 "state": "online", 00:12:12.317 "raid_level": "raid1", 00:12:12.317 "superblock": true, 00:12:12.317 "num_base_bdevs": 4, 00:12:12.317 "num_base_bdevs_discovered": 3, 00:12:12.317 "num_base_bdevs_operational": 3, 00:12:12.317 "base_bdevs_list": [ 00:12:12.317 { 00:12:12.317 "name": "spare", 00:12:12.317 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:12.317 "is_configured": true, 00:12:12.317 "data_offset": 2048, 00:12:12.317 "data_size": 63488 00:12:12.317 }, 00:12:12.317 { 00:12:12.317 "name": null, 00:12:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.318 "is_configured": false, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 00:12:12.318 }, 00:12:12.318 { 00:12:12.318 "name": "BaseBdev3", 00:12:12.318 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:12.318 "is_configured": true, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 
00:12:12.318 }, 00:12:12.318 { 00:12:12.318 "name": "BaseBdev4", 00:12:12.318 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:12.318 "is_configured": true, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 00:12:12.318 } 00:12:12.318 ] 00:12:12.318 }' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 [2024-12-07 01:56:17.669527] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.318 "name": "raid_bdev1", 00:12:12.318 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:12.318 "strip_size_kb": 0, 00:12:12.318 "state": "online", 00:12:12.318 "raid_level": "raid1", 00:12:12.318 "superblock": true, 00:12:12.318 "num_base_bdevs": 4, 00:12:12.318 "num_base_bdevs_discovered": 2, 00:12:12.318 
"num_base_bdevs_operational": 2, 00:12:12.318 "base_bdevs_list": [ 00:12:12.318 { 00:12:12.318 "name": null, 00:12:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.318 "is_configured": false, 00:12:12.318 "data_offset": 0, 00:12:12.318 "data_size": 63488 00:12:12.318 }, 00:12:12.318 { 00:12:12.318 "name": null, 00:12:12.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.318 "is_configured": false, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 00:12:12.318 }, 00:12:12.318 { 00:12:12.318 "name": "BaseBdev3", 00:12:12.318 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:12.318 "is_configured": true, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 00:12:12.318 }, 00:12:12.318 { 00:12:12.318 "name": "BaseBdev4", 00:12:12.318 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:12.318 "is_configured": true, 00:12:12.318 "data_offset": 2048, 00:12:12.318 "data_size": 63488 00:12:12.318 } 00:12:12.318 ] 00:12:12.318 }' 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.318 01:56:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.883 01:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.883 01:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.883 01:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.883 [2024-12-07 01:56:18.104804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.883 [2024-12-07 01:56:18.105049] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:12.883 [2024-12-07 01:56:18.105119] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:12.883 [2024-12-07 01:56:18.105222] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.883 [2024-12-07 01:56:18.108299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:12.883 [2024-12-07 01:56:18.110161] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.883 01:56:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.883 01:56:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.817 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.817 "name": "raid_bdev1", 00:12:13.817 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:13.817 "strip_size_kb": 0, 00:12:13.817 "state": "online", 00:12:13.817 "raid_level": "raid1", 
00:12:13.817 "superblock": true, 00:12:13.817 "num_base_bdevs": 4, 00:12:13.817 "num_base_bdevs_discovered": 3, 00:12:13.817 "num_base_bdevs_operational": 3, 00:12:13.817 "process": { 00:12:13.817 "type": "rebuild", 00:12:13.817 "target": "spare", 00:12:13.817 "progress": { 00:12:13.817 "blocks": 20480, 00:12:13.817 "percent": 32 00:12:13.817 } 00:12:13.817 }, 00:12:13.817 "base_bdevs_list": [ 00:12:13.817 { 00:12:13.817 "name": "spare", 00:12:13.817 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:13.817 "is_configured": true, 00:12:13.817 "data_offset": 2048, 00:12:13.817 "data_size": 63488 00:12:13.817 }, 00:12:13.817 { 00:12:13.817 "name": null, 00:12:13.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.817 "is_configured": false, 00:12:13.817 "data_offset": 2048, 00:12:13.817 "data_size": 63488 00:12:13.817 }, 00:12:13.817 { 00:12:13.817 "name": "BaseBdev3", 00:12:13.817 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:13.817 "is_configured": true, 00:12:13.817 "data_offset": 2048, 00:12:13.817 "data_size": 63488 00:12:13.817 }, 00:12:13.817 { 00:12:13.817 "name": "BaseBdev4", 00:12:13.817 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:13.817 "is_configured": true, 00:12:13.817 "data_offset": 2048, 00:12:13.817 "data_size": 63488 00:12:13.817 } 00:12:13.818 ] 00:12:13.818 }' 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:13.818 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.076 [2024-12-07 01:56:19.277765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.076 [2024-12-07 01:56:19.314380] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:14.076 [2024-12-07 01:56:19.314503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.076 [2024-12-07 01:56:19.314539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:14.076 [2024-12-07 01:56:19.314562] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.076 "name": "raid_bdev1", 00:12:14.076 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:14.076 "strip_size_kb": 0, 00:12:14.076 "state": "online", 00:12:14.076 "raid_level": "raid1", 00:12:14.076 "superblock": true, 00:12:14.076 "num_base_bdevs": 4, 00:12:14.076 "num_base_bdevs_discovered": 2, 00:12:14.076 "num_base_bdevs_operational": 2, 00:12:14.076 "base_bdevs_list": [ 00:12:14.076 { 00:12:14.076 "name": null, 00:12:14.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.076 "is_configured": false, 00:12:14.076 "data_offset": 0, 00:12:14.076 "data_size": 63488 00:12:14.076 }, 00:12:14.076 { 00:12:14.076 "name": null, 00:12:14.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.076 "is_configured": false, 00:12:14.076 "data_offset": 2048, 00:12:14.076 "data_size": 63488 00:12:14.076 }, 00:12:14.076 { 00:12:14.076 "name": "BaseBdev3", 00:12:14.076 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:14.076 "is_configured": true, 00:12:14.076 "data_offset": 2048, 00:12:14.076 "data_size": 63488 00:12:14.076 }, 00:12:14.076 { 00:12:14.076 "name": "BaseBdev4", 00:12:14.076 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:14.076 "is_configured": true, 00:12:14.076 "data_offset": 2048, 00:12:14.076 "data_size": 63488 00:12:14.076 } 00:12:14.076 ] 00:12:14.076 }' 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:14.076 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.334 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:14.334 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.334 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.334 [2024-12-07 01:56:19.761738] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:14.334 [2024-12-07 01:56:19.761804] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.334 [2024-12-07 01:56:19.761838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:14.334 [2024-12-07 01:56:19.761852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.334 [2024-12-07 01:56:19.762323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.334 [2024-12-07 01:56:19.762356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:14.334 [2024-12-07 01:56:19.762454] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:14.334 [2024-12-07 01:56:19.762481] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:14.334 [2024-12-07 01:56:19.762490] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:14.334 [2024-12-07 01:56:19.762529] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.334 [2024-12-07 01:56:19.765676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:14.334 spare 00:12:14.334 01:56:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.334 01:56:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:14.335 [2024-12-07 01:56:19.767554] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.714 "name": "raid_bdev1", 00:12:15.714 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:15.714 "strip_size_kb": 0, 00:12:15.714 "state": "online", 00:12:15.714 
"raid_level": "raid1", 00:12:15.714 "superblock": true, 00:12:15.714 "num_base_bdevs": 4, 00:12:15.714 "num_base_bdevs_discovered": 3, 00:12:15.714 "num_base_bdevs_operational": 3, 00:12:15.714 "process": { 00:12:15.714 "type": "rebuild", 00:12:15.714 "target": "spare", 00:12:15.714 "progress": { 00:12:15.714 "blocks": 20480, 00:12:15.714 "percent": 32 00:12:15.714 } 00:12:15.714 }, 00:12:15.714 "base_bdevs_list": [ 00:12:15.714 { 00:12:15.714 "name": "spare", 00:12:15.714 "uuid": "6b03bf76-2c32-58d5-b93c-7912597c28d0", 00:12:15.714 "is_configured": true, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": null, 00:12:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.714 "is_configured": false, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": "BaseBdev3", 00:12:15.714 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:15.714 "is_configured": true, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": "BaseBdev4", 00:12:15.714 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:15.714 "is_configured": true, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 } 00:12:15.714 ] 00:12:15.714 }' 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.714 [2024-12-07 01:56:20.912435] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.714 [2024-12-07 01:56:20.971466] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:15.714 [2024-12-07 01:56:20.971520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.714 [2024-12-07 01:56:20.971539] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:15.714 [2024-12-07 01:56:20.971546] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.714 
01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.714 01:56:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.714 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.714 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.714 "name": "raid_bdev1", 00:12:15.714 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:15.714 "strip_size_kb": 0, 00:12:15.714 "state": "online", 00:12:15.714 "raid_level": "raid1", 00:12:15.714 "superblock": true, 00:12:15.714 "num_base_bdevs": 4, 00:12:15.714 "num_base_bdevs_discovered": 2, 00:12:15.714 "num_base_bdevs_operational": 2, 00:12:15.714 "base_bdevs_list": [ 00:12:15.714 { 00:12:15.714 "name": null, 00:12:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.714 "is_configured": false, 00:12:15.714 "data_offset": 0, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": null, 00:12:15.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.714 "is_configured": false, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": "BaseBdev3", 00:12:15.714 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:15.714 "is_configured": true, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 }, 00:12:15.714 { 00:12:15.714 "name": "BaseBdev4", 00:12:15.714 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:15.714 "is_configured": true, 00:12:15.714 "data_offset": 2048, 00:12:15.714 "data_size": 63488 00:12:15.714 } 00:12:15.714 ] 00:12:15.714 }' 00:12:15.714 01:56:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.714 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.976 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.235 "name": "raid_bdev1", 00:12:16.235 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:16.235 "strip_size_kb": 0, 00:12:16.235 "state": "online", 00:12:16.235 "raid_level": "raid1", 00:12:16.235 "superblock": true, 00:12:16.235 "num_base_bdevs": 4, 00:12:16.235 "num_base_bdevs_discovered": 2, 00:12:16.235 "num_base_bdevs_operational": 2, 00:12:16.235 "base_bdevs_list": [ 00:12:16.235 { 00:12:16.235 "name": null, 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.235 "is_configured": false, 00:12:16.235 "data_offset": 0, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 
{ 00:12:16.235 "name": null, 00:12:16.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.235 "is_configured": false, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 { 00:12:16.235 "name": "BaseBdev3", 00:12:16.235 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 }, 00:12:16.235 { 00:12:16.235 "name": "BaseBdev4", 00:12:16.235 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:16.235 "is_configured": true, 00:12:16.235 "data_offset": 2048, 00:12:16.235 "data_size": 63488 00:12:16.235 } 00:12:16.235 ] 00:12:16.235 }' 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.235 [2024-12-07 01:56:21.546696] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:16.235 [2024-12-07 01:56:21.546755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.235 [2024-12-07 01:56:21.546780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:16.235 [2024-12-07 01:56:21.546789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.235 [2024-12-07 01:56:21.547191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.235 [2024-12-07 01:56:21.547207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:16.235 [2024-12-07 01:56:21.547287] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:16.235 [2024-12-07 01:56:21.547301] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:16.235 [2024-12-07 01:56:21.547313] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:16.235 [2024-12-07 01:56:21.547322] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:16.235 BaseBdev1 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.235 01:56:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.173 01:56:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.173 "name": "raid_bdev1", 00:12:17.173 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:17.173 "strip_size_kb": 0, 00:12:17.173 "state": "online", 00:12:17.173 "raid_level": "raid1", 00:12:17.173 "superblock": true, 00:12:17.173 "num_base_bdevs": 4, 00:12:17.173 "num_base_bdevs_discovered": 2, 00:12:17.173 "num_base_bdevs_operational": 2, 00:12:17.173 "base_bdevs_list": [ 00:12:17.173 { 00:12:17.173 "name": null, 00:12:17.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.173 "is_configured": false, 00:12:17.173 "data_offset": 0, 00:12:17.173 "data_size": 63488 00:12:17.173 }, 00:12:17.173 { 00:12:17.173 "name": null, 00:12:17.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.173 
"is_configured": false, 00:12:17.173 "data_offset": 2048, 00:12:17.173 "data_size": 63488 00:12:17.173 }, 00:12:17.173 { 00:12:17.173 "name": "BaseBdev3", 00:12:17.173 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:17.173 "is_configured": true, 00:12:17.173 "data_offset": 2048, 00:12:17.173 "data_size": 63488 00:12:17.173 }, 00:12:17.173 { 00:12:17.173 "name": "BaseBdev4", 00:12:17.173 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:17.173 "is_configured": true, 00:12:17.173 "data_offset": 2048, 00:12:17.173 "data_size": 63488 00:12:17.173 } 00:12:17.173 ] 00:12:17.173 }' 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.173 01:56:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:17.741 "name": "raid_bdev1", 00:12:17.741 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:17.741 "strip_size_kb": 0, 00:12:17.741 "state": "online", 00:12:17.741 "raid_level": "raid1", 00:12:17.741 "superblock": true, 00:12:17.741 "num_base_bdevs": 4, 00:12:17.741 "num_base_bdevs_discovered": 2, 00:12:17.741 "num_base_bdevs_operational": 2, 00:12:17.741 "base_bdevs_list": [ 00:12:17.741 { 00:12:17.741 "name": null, 00:12:17.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.741 "is_configured": false, 00:12:17.741 "data_offset": 0, 00:12:17.741 "data_size": 63488 00:12:17.741 }, 00:12:17.741 { 00:12:17.741 "name": null, 00:12:17.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.741 "is_configured": false, 00:12:17.741 "data_offset": 2048, 00:12:17.741 "data_size": 63488 00:12:17.741 }, 00:12:17.741 { 00:12:17.741 "name": "BaseBdev3", 00:12:17.741 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:17.741 "is_configured": true, 00:12:17.741 "data_offset": 2048, 00:12:17.741 "data_size": 63488 00:12:17.741 }, 00:12:17.741 { 00:12:17.741 "name": "BaseBdev4", 00:12:17.741 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:17.741 "is_configured": true, 00:12:17.741 "data_offset": 2048, 00:12:17.741 "data_size": 63488 00:12:17.741 } 00:12:17.741 ] 00:12:17.741 }' 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.741 [2024-12-07 01:56:23.183943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.741 [2024-12-07 01:56:23.184169] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:17.741 [2024-12-07 01:56:23.184188] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:17.741 request: 00:12:17.741 { 00:12:17.741 "base_bdev": "BaseBdev1", 00:12:17.741 "raid_bdev": "raid_bdev1", 00:12:17.741 "method": "bdev_raid_add_base_bdev", 00:12:17.741 "req_id": 1 00:12:17.741 } 00:12:17.741 Got JSON-RPC error response 00:12:17.741 response: 00:12:17.741 { 00:12:17.741 "code": -22, 00:12:17.741 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:17.741 } 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.741 01:56:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.118 "name": "raid_bdev1", 00:12:19.118 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:19.118 "strip_size_kb": 0, 00:12:19.118 "state": "online", 00:12:19.118 "raid_level": "raid1", 00:12:19.118 "superblock": true, 00:12:19.118 "num_base_bdevs": 4, 00:12:19.118 "num_base_bdevs_discovered": 2, 00:12:19.118 "num_base_bdevs_operational": 2, 00:12:19.118 "base_bdevs_list": [ 00:12:19.118 { 00:12:19.118 "name": null, 00:12:19.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.118 "is_configured": false, 00:12:19.118 "data_offset": 0, 00:12:19.118 "data_size": 63488 00:12:19.118 }, 00:12:19.118 { 00:12:19.118 "name": null, 00:12:19.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.118 "is_configured": false, 00:12:19.118 "data_offset": 2048, 00:12:19.118 "data_size": 63488 00:12:19.118 }, 00:12:19.118 { 00:12:19.118 "name": "BaseBdev3", 00:12:19.118 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:19.118 "is_configured": true, 00:12:19.118 "data_offset": 2048, 00:12:19.118 "data_size": 63488 00:12:19.118 }, 00:12:19.118 { 00:12:19.118 "name": "BaseBdev4", 00:12:19.118 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:19.118 "is_configured": true, 00:12:19.118 "data_offset": 2048, 00:12:19.118 "data_size": 63488 00:12:19.118 } 00:12:19.118 ] 00:12:19.118 }' 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.118 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.378 01:56:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.378 "name": "raid_bdev1", 00:12:19.378 "uuid": "f4092b06-6966-4983-ac79-4e83ef4b2fb4", 00:12:19.378 "strip_size_kb": 0, 00:12:19.378 "state": "online", 00:12:19.378 "raid_level": "raid1", 00:12:19.378 "superblock": true, 00:12:19.378 "num_base_bdevs": 4, 00:12:19.378 "num_base_bdevs_discovered": 2, 00:12:19.378 "num_base_bdevs_operational": 2, 00:12:19.378 "base_bdevs_list": [ 00:12:19.378 { 00:12:19.378 "name": null, 00:12:19.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.378 "is_configured": false, 00:12:19.378 "data_offset": 0, 00:12:19.378 "data_size": 63488 00:12:19.378 }, 00:12:19.378 { 00:12:19.378 "name": null, 00:12:19.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.378 "is_configured": false, 00:12:19.378 "data_offset": 2048, 00:12:19.378 "data_size": 63488 00:12:19.378 }, 00:12:19.378 { 00:12:19.378 "name": "BaseBdev3", 00:12:19.378 "uuid": "4f2efbad-7136-5d71-b3b3-e145cd105532", 00:12:19.378 "is_configured": true, 00:12:19.378 "data_offset": 2048, 00:12:19.378 "data_size": 63488 00:12:19.378 }, 
00:12:19.378 { 00:12:19.378 "name": "BaseBdev4", 00:12:19.378 "uuid": "1bc138e9-2f44-5c4e-9c6e-cb2481bb1159", 00:12:19.378 "is_configured": true, 00:12:19.378 "data_offset": 2048, 00:12:19.378 "data_size": 63488 00:12:19.378 } 00:12:19.378 ] 00:12:19.378 }' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88318 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 88318 ']' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 88318 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88318 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88318' 00:12:19.378 killing process with pid 88318 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 88318 00:12:19.378 Received shutdown signal, test time was about 60.000000 seconds 00:12:19.378 00:12:19.378 Latency(us) 00:12:19.378 
[2024-12-07T01:56:24.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.378 [2024-12-07T01:56:24.840Z] =================================================================================================================== 00:12:19.378 [2024-12-07T01:56:24.840Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:19.378 [2024-12-07 01:56:24.758045] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.378 01:56:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 88318 00:12:19.378 [2024-12-07 01:56:24.758199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.378 [2024-12-07 01:56:24.758291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.378 [2024-12-07 01:56:24.758306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:19.378 [2024-12-07 01:56:24.809471] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.637 01:56:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:19.637 00:12:19.637 real 0m22.958s 00:12:19.637 user 0m28.247s 00:12:19.637 sys 0m3.538s 00:12:19.637 ************************************ 00:12:19.637 END TEST raid_rebuild_test_sb 00:12:19.637 ************************************ 00:12:19.637 01:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.637 01:56:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.896 01:56:25 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:19.896 01:56:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:19.896 01:56:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.896 01:56:25 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:12:19.896 ************************************ 00:12:19.896 START TEST raid_rebuild_test_io 00:12:19.896 ************************************ 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89053 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89053 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 89053 ']' 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:19.896 01:56:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.896 [2024-12-07 01:56:25.224288] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:19.896 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:19.896 Zero copy mechanism will not be used. 00:12:19.897 [2024-12-07 01:56:25.224502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89053 ] 00:12:20.154 [2024-12-07 01:56:25.372460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.154 [2024-12-07 01:56:25.418313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.154 [2024-12-07 01:56:25.459911] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.154 [2024-12-07 01:56:25.460027] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.719 BaseBdev1_malloc 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.719 [2024-12-07 01:56:26.093964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:20.719 [2024-12-07 01:56:26.094023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.719 [2024-12-07 01:56:26.094049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:20.719 [2024-12-07 01:56:26.094069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.719 [2024-12-07 01:56:26.096407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.719 [2024-12-07 01:56:26.096444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.719 BaseBdev1 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.719 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:20.720 BaseBdev2_malloc 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.720 [2024-12-07 01:56:26.133579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:20.720 [2024-12-07 01:56:26.133726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.720 [2024-12-07 01:56:26.133764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.720 [2024-12-07 01:56:26.133778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.720 [2024-12-07 01:56:26.136762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.720 [2024-12-07 01:56:26.136835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.720 BaseBdev2 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.720 BaseBdev3_malloc 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.720 01:56:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.720 [2024-12-07 01:56:26.162358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:20.720 [2024-12-07 01:56:26.162413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.720 [2024-12-07 01:56:26.162440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.720 [2024-12-07 01:56:26.162449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.720 [2024-12-07 01:56:26.164736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.720 [2024-12-07 01:56:26.164765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.720 BaseBdev3 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.720 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 BaseBdev4_malloc 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 [2024-12-07 01:56:26.191307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:20.979 [2024-12-07 01:56:26.191354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.979 [2024-12-07 01:56:26.191374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:20.979 [2024-12-07 01:56:26.191383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.979 [2024-12-07 01:56:26.193454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.979 [2024-12-07 01:56:26.193485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.979 BaseBdev4 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 spare_malloc 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 spare_delay 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.979 
01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.979 [2024-12-07 01:56:26.231884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:20.979 [2024-12-07 01:56:26.231965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.979 [2024-12-07 01:56:26.232002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:20.979 [2024-12-07 01:56:26.232011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.979 [2024-12-07 01:56:26.234024] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.979 [2024-12-07 01:56:26.234059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:20.979 spare 00:12:20.979 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.980 [2024-12-07 01:56:26.243926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.980 [2024-12-07 01:56:26.245711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.980 [2024-12-07 01:56:26.245817] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.980 [2024-12-07 01:56:26.245871] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.980 [2024-12-07 01:56:26.245943] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:20.980 [2024-12-07 01:56:26.245951] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:20.980 [2024-12-07 01:56:26.246217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:20.980 [2024-12-07 01:56:26.246325] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:20.980 [2024-12-07 01:56:26.246335] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:20.980 [2024-12-07 01:56:26.246448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.980 01:56:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.980 "name": "raid_bdev1", 00:12:20.980 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:20.980 "strip_size_kb": 0, 00:12:20.980 "state": "online", 00:12:20.980 "raid_level": "raid1", 00:12:20.980 "superblock": false, 00:12:20.980 "num_base_bdevs": 4, 00:12:20.980 "num_base_bdevs_discovered": 4, 00:12:20.980 "num_base_bdevs_operational": 4, 00:12:20.980 "base_bdevs_list": [ 00:12:20.980 { 00:12:20.980 "name": "BaseBdev1", 00:12:20.980 "uuid": "b3b8efbc-55a5-5e0a-a27d-4887931d6135", 00:12:20.980 "is_configured": true, 00:12:20.980 "data_offset": 0, 00:12:20.980 "data_size": 65536 00:12:20.980 }, 00:12:20.980 { 00:12:20.980 "name": "BaseBdev2", 00:12:20.980 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:20.980 "is_configured": true, 00:12:20.980 "data_offset": 0, 00:12:20.980 "data_size": 65536 00:12:20.980 }, 00:12:20.980 { 00:12:20.980 "name": "BaseBdev3", 00:12:20.980 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:20.980 "is_configured": true, 00:12:20.980 "data_offset": 0, 00:12:20.980 "data_size": 65536 00:12:20.980 }, 00:12:20.980 { 00:12:20.980 "name": "BaseBdev4", 00:12:20.980 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:20.980 "is_configured": true, 00:12:20.980 "data_offset": 0, 00:12:20.980 "data_size": 65536 
00:12:20.980 } 00:12:20.980 ] 00:12:20.980 }' 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.980 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.239 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.239 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.239 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.239 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.239 [2024-12-07 01:56:26.695516] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.498 [2024-12-07 01:56:26.763007] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.498 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.499 "name": "raid_bdev1", 00:12:21.499 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:21.499 "strip_size_kb": 0, 00:12:21.499 "state": "online", 00:12:21.499 "raid_level": "raid1", 00:12:21.499 "superblock": false, 00:12:21.499 "num_base_bdevs": 4, 00:12:21.499 "num_base_bdevs_discovered": 3, 00:12:21.499 "num_base_bdevs_operational": 3, 00:12:21.499 "base_bdevs_list": [ 00:12:21.499 { 00:12:21.499 "name": null, 00:12:21.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.499 "is_configured": false, 00:12:21.499 "data_offset": 0, 00:12:21.499 "data_size": 65536 00:12:21.499 }, 00:12:21.499 { 00:12:21.499 "name": "BaseBdev2", 00:12:21.499 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:21.499 "is_configured": true, 00:12:21.499 "data_offset": 0, 00:12:21.499 "data_size": 65536 00:12:21.499 }, 00:12:21.499 { 00:12:21.499 "name": "BaseBdev3", 00:12:21.499 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:21.499 "is_configured": true, 00:12:21.499 "data_offset": 0, 00:12:21.499 "data_size": 65536 00:12:21.499 }, 00:12:21.499 { 00:12:21.499 "name": "BaseBdev4", 00:12:21.499 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:21.499 "is_configured": true, 00:12:21.499 "data_offset": 0, 00:12:21.499 "data_size": 65536 00:12:21.499 } 00:12:21.499 ] 00:12:21.499 }' 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.499 01:56:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.499 [2024-12-07 01:56:26.848834] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:21.499 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:12:21.499 Zero copy mechanism will not be used. 00:12:21.499 Running I/O for 60 seconds... 00:12:21.758 01:56:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.758 01:56:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.758 01:56:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.758 [2024-12-07 01:56:27.206861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.018 01:56:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.018 01:56:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:22.018 [2024-12-07 01:56:27.241790] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:22.018 [2024-12-07 01:56:27.243750] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.018 [2024-12-07 01:56:27.358515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.018 [2024-12-07 01:56:27.359012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.277 [2024-12-07 01:56:27.487733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.277 [2024-12-07 01:56:27.488429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.537 [2024-12-07 01:56:27.828109] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.537 [2024-12-07 01:56:27.828486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.796 175.00 IOPS, 525.00 
MiB/s [2024-12-07T01:56:28.258Z] [2024-12-07 01:56:28.038354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.796 [2024-12-07 01:56:28.038644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.796 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.056 "name": "raid_bdev1", 00:12:23.056 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:23.056 "strip_size_kb": 0, 00:12:23.056 "state": "online", 00:12:23.056 "raid_level": "raid1", 00:12:23.056 "superblock": false, 00:12:23.056 "num_base_bdevs": 4, 00:12:23.056 "num_base_bdevs_discovered": 4, 00:12:23.056 "num_base_bdevs_operational": 4, 00:12:23.056 "process": { 00:12:23.056 "type": "rebuild", 00:12:23.056 "target": 
"spare", 00:12:23.056 "progress": { 00:12:23.056 "blocks": 12288, 00:12:23.056 "percent": 18 00:12:23.056 } 00:12:23.056 }, 00:12:23.056 "base_bdevs_list": [ 00:12:23.056 { 00:12:23.056 "name": "spare", 00:12:23.056 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:23.056 "is_configured": true, 00:12:23.056 "data_offset": 0, 00:12:23.056 "data_size": 65536 00:12:23.056 }, 00:12:23.056 { 00:12:23.056 "name": "BaseBdev2", 00:12:23.056 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:23.056 "is_configured": true, 00:12:23.056 "data_offset": 0, 00:12:23.056 "data_size": 65536 00:12:23.056 }, 00:12:23.056 { 00:12:23.056 "name": "BaseBdev3", 00:12:23.056 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:23.056 "is_configured": true, 00:12:23.056 "data_offset": 0, 00:12:23.056 "data_size": 65536 00:12:23.056 }, 00:12:23.056 { 00:12:23.056 "name": "BaseBdev4", 00:12:23.056 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:23.056 "is_configured": true, 00:12:23.056 "data_offset": 0, 00:12:23.056 "data_size": 65536 00:12:23.056 } 00:12:23.056 ] 00:12:23.056 }' 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.056 [2024-12-07 01:56:28.304780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.056 [2024-12-07 01:56:28.380307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.056 [2024-12-07 01:56:28.461294] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.056 [2024-12-07 01:56:28.477669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.056 [2024-12-07 01:56:28.477715] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.056 [2024-12-07 01:56:28.477730] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.056 [2024-12-07 01:56:28.495397] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.056 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.316 "name": "raid_bdev1", 00:12:23.316 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:23.316 "strip_size_kb": 0, 00:12:23.316 "state": "online", 00:12:23.316 "raid_level": "raid1", 00:12:23.316 "superblock": false, 00:12:23.316 "num_base_bdevs": 4, 00:12:23.316 "num_base_bdevs_discovered": 3, 00:12:23.316 "num_base_bdevs_operational": 3, 00:12:23.316 "base_bdevs_list": [ 00:12:23.316 { 00:12:23.316 "name": null, 00:12:23.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.316 "is_configured": false, 00:12:23.316 "data_offset": 0, 00:12:23.316 "data_size": 65536 00:12:23.316 }, 00:12:23.316 { 00:12:23.316 "name": "BaseBdev2", 00:12:23.316 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:23.316 "is_configured": true, 00:12:23.316 "data_offset": 0, 00:12:23.316 "data_size": 65536 00:12:23.316 }, 00:12:23.316 { 00:12:23.316 "name": "BaseBdev3", 00:12:23.316 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:23.316 "is_configured": true, 00:12:23.316 "data_offset": 0, 00:12:23.316 "data_size": 65536 00:12:23.316 }, 00:12:23.316 { 00:12:23.316 "name": "BaseBdev4", 00:12:23.316 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:23.316 "is_configured": true, 00:12:23.316 "data_offset": 0, 00:12:23.316 "data_size": 65536 00:12:23.316 } 00:12:23.316 ] 00:12:23.316 }' 00:12:23.316 01:56:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.316 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.575 159.00 IOPS, 477.00 MiB/s [2024-12-07T01:56:29.037Z] 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.575 "name": "raid_bdev1", 00:12:23.575 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:23.575 "strip_size_kb": 0, 00:12:23.575 "state": "online", 00:12:23.575 "raid_level": "raid1", 00:12:23.575 "superblock": false, 00:12:23.575 "num_base_bdevs": 4, 00:12:23.575 "num_base_bdevs_discovered": 3, 00:12:23.575 "num_base_bdevs_operational": 3, 00:12:23.575 "base_bdevs_list": [ 00:12:23.575 { 00:12:23.575 "name": null, 00:12:23.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.575 "is_configured": false, 00:12:23.575 "data_offset": 0, 
00:12:23.575 "data_size": 65536 00:12:23.575 }, 00:12:23.575 { 00:12:23.575 "name": "BaseBdev2", 00:12:23.575 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:23.575 "is_configured": true, 00:12:23.575 "data_offset": 0, 00:12:23.575 "data_size": 65536 00:12:23.575 }, 00:12:23.575 { 00:12:23.575 "name": "BaseBdev3", 00:12:23.575 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:23.575 "is_configured": true, 00:12:23.575 "data_offset": 0, 00:12:23.575 "data_size": 65536 00:12:23.575 }, 00:12:23.575 { 00:12:23.575 "name": "BaseBdev4", 00:12:23.575 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:23.575 "is_configured": true, 00:12:23.575 "data_offset": 0, 00:12:23.575 "data_size": 65536 00:12:23.575 } 00:12:23.575 ] 00:12:23.575 }' 00:12:23.575 01:56:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.575 01:56:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.575 01:56:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.835 [2024-12-07 01:56:29.058883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.835 01:56:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:23.835 [2024-12-07 01:56:29.112592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:23.835 [2024-12-07 
01:56:29.114594] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.835 [2024-12-07 01:56:29.228726] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.835 [2024-12-07 01:56:29.229351] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:24.095 [2024-12-07 01:56:29.339331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.095 [2024-12-07 01:56:29.339626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.354 [2024-12-07 01:56:29.674402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.354 [2024-12-07 01:56:29.675730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.613 158.00 IOPS, 474.00 MiB/s [2024-12-07T01:56:30.075Z] [2024-12-07 01:56:29.877211] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.613 [2024-12-07 01:56:29.877837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.873 "name": "raid_bdev1", 00:12:24.873 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:24.873 "strip_size_kb": 0, 00:12:24.873 "state": "online", 00:12:24.873 "raid_level": "raid1", 00:12:24.873 "superblock": false, 00:12:24.873 "num_base_bdevs": 4, 00:12:24.873 "num_base_bdevs_discovered": 4, 00:12:24.873 "num_base_bdevs_operational": 4, 00:12:24.873 "process": { 00:12:24.873 "type": "rebuild", 00:12:24.873 "target": "spare", 00:12:24.873 "progress": { 00:12:24.873 "blocks": 12288, 00:12:24.873 "percent": 18 00:12:24.873 } 00:12:24.873 }, 00:12:24.873 "base_bdevs_list": [ 00:12:24.873 { 00:12:24.873 "name": "spare", 00:12:24.873 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:24.873 "is_configured": true, 00:12:24.873 "data_offset": 0, 00:12:24.873 "data_size": 65536 00:12:24.873 }, 00:12:24.873 { 00:12:24.873 "name": "BaseBdev2", 00:12:24.873 "uuid": "9f1bfc8a-de2b-59ed-ac76-aa74cf5a003d", 00:12:24.873 "is_configured": true, 00:12:24.873 "data_offset": 0, 00:12:24.873 "data_size": 65536 00:12:24.873 }, 00:12:24.873 { 00:12:24.873 "name": "BaseBdev3", 00:12:24.873 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:24.873 "is_configured": true, 00:12:24.873 "data_offset": 0, 00:12:24.873 "data_size": 65536 00:12:24.873 }, 00:12:24.873 { 00:12:24.873 "name": "BaseBdev4", 00:12:24.873 
"uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:24.873 "is_configured": true, 00:12:24.873 "data_offset": 0, 00:12:24.873 "data_size": 65536 00:12:24.873 } 00:12:24.873 ] 00:12:24.873 }' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.873 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.873 [2024-12-07 01:56:30.241259] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:25.133 [2024-12-07 01:56:30.362739] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:25.133 [2024-12-07 01:56:30.362773] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:25.134 
01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.134 "name": "raid_bdev1", 00:12:25.134 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:25.134 "strip_size_kb": 0, 00:12:25.134 "state": "online", 00:12:25.134 "raid_level": "raid1", 00:12:25.134 "superblock": false, 00:12:25.134 "num_base_bdevs": 4, 00:12:25.134 "num_base_bdevs_discovered": 3, 00:12:25.134 "num_base_bdevs_operational": 3, 00:12:25.134 "process": { 00:12:25.134 "type": "rebuild", 00:12:25.134 "target": "spare", 00:12:25.134 "progress": { 00:12:25.134 "blocks": 16384, 00:12:25.134 "percent": 25 00:12:25.134 } 00:12:25.134 }, 00:12:25.134 "base_bdevs_list": [ 00:12:25.134 { 00:12:25.134 "name": "spare", 00:12:25.134 "uuid": 
"27a8c196-117f-5a29-af49-050486933fd2", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 00:12:25.134 { 00:12:25.134 "name": null, 00:12:25.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.134 "is_configured": false, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 00:12:25.134 { 00:12:25.134 "name": "BaseBdev3", 00:12:25.134 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 00:12:25.134 { 00:12:25.134 "name": "BaseBdev4", 00:12:25.134 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 } 00:12:25.134 ] 00:12:25.134 }' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=388 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.134 01:56:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.134 "name": "raid_bdev1", 00:12:25.134 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:25.134 "strip_size_kb": 0, 00:12:25.134 "state": "online", 00:12:25.134 "raid_level": "raid1", 00:12:25.134 "superblock": false, 00:12:25.134 "num_base_bdevs": 4, 00:12:25.134 "num_base_bdevs_discovered": 3, 00:12:25.134 "num_base_bdevs_operational": 3, 00:12:25.134 "process": { 00:12:25.134 "type": "rebuild", 00:12:25.134 "target": "spare", 00:12:25.134 "progress": { 00:12:25.134 "blocks": 18432, 00:12:25.134 "percent": 28 00:12:25.134 } 00:12:25.134 }, 00:12:25.134 "base_bdevs_list": [ 00:12:25.134 { 00:12:25.134 "name": "spare", 00:12:25.134 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 00:12:25.134 { 00:12:25.134 "name": null, 00:12:25.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.134 "is_configured": false, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 00:12:25.134 { 00:12:25.134 "name": "BaseBdev3", 00:12:25.134 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 }, 
00:12:25.134 { 00:12:25.134 "name": "BaseBdev4", 00:12:25.134 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:25.134 "is_configured": true, 00:12:25.134 "data_offset": 0, 00:12:25.134 "data_size": 65536 00:12:25.134 } 00:12:25.134 ] 00:12:25.134 }' 00:12:25.134 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.134 [2024-12-07 01:56:30.593221] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:25.394 [2024-12-07 01:56:30.593742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:25.395 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.395 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.395 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.395 01:56:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.395 [2024-12-07 01:56:30.803091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:25.395 [2024-12-07 01:56:30.803390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:25.961 139.25 IOPS, 417.75 MiB/s [2024-12-07T01:56:31.423Z] [2024-12-07 01:56:31.121044] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:25.961 [2024-12-07 01:56:31.324042] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:26.220 [2024-12-07 01:56:31.657391] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 
36864 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.220 01:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.479 "name": "raid_bdev1", 00:12:26.479 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:26.479 "strip_size_kb": 0, 00:12:26.479 "state": "online", 00:12:26.479 "raid_level": "raid1", 00:12:26.479 "superblock": false, 00:12:26.479 "num_base_bdevs": 4, 00:12:26.479 "num_base_bdevs_discovered": 3, 00:12:26.479 "num_base_bdevs_operational": 3, 00:12:26.479 "process": { 00:12:26.479 "type": "rebuild", 00:12:26.479 "target": "spare", 00:12:26.479 "progress": { 00:12:26.479 "blocks": 34816, 00:12:26.479 "percent": 53 00:12:26.479 } 00:12:26.479 }, 00:12:26.479 "base_bdevs_list": [ 00:12:26.479 { 00:12:26.479 "name": "spare", 00:12:26.479 "uuid": 
"27a8c196-117f-5a29-af49-050486933fd2", 00:12:26.479 "is_configured": true, 00:12:26.479 "data_offset": 0, 00:12:26.479 "data_size": 65536 00:12:26.479 }, 00:12:26.479 { 00:12:26.479 "name": null, 00:12:26.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.479 "is_configured": false, 00:12:26.479 "data_offset": 0, 00:12:26.479 "data_size": 65536 00:12:26.479 }, 00:12:26.479 { 00:12:26.479 "name": "BaseBdev3", 00:12:26.479 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:26.479 "is_configured": true, 00:12:26.479 "data_offset": 0, 00:12:26.479 "data_size": 65536 00:12:26.479 }, 00:12:26.479 { 00:12:26.479 "name": "BaseBdev4", 00:12:26.479 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:26.479 "is_configured": true, 00:12:26.479 "data_offset": 0, 00:12:26.479 "data_size": 65536 00:12:26.479 } 00:12:26.479 ] 00:12:26.479 }' 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.479 01:56:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.738 122.00 IOPS, 366.00 MiB/s [2024-12-07T01:56:32.200Z] [2024-12-07 01:56:32.100546] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:27.305 [2024-12-07 01:56:32.568269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.565 109.50 IOPS, 328.50 MiB/s [2024-12-07T01:56:33.027Z] 01:56:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.565 "name": "raid_bdev1", 00:12:27.565 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:27.565 "strip_size_kb": 0, 00:12:27.565 "state": "online", 00:12:27.565 "raid_level": "raid1", 00:12:27.565 "superblock": false, 00:12:27.565 "num_base_bdevs": 4, 00:12:27.565 "num_base_bdevs_discovered": 3, 00:12:27.565 "num_base_bdevs_operational": 3, 00:12:27.565 "process": { 00:12:27.565 "type": "rebuild", 00:12:27.565 "target": "spare", 00:12:27.565 "progress": { 00:12:27.565 "blocks": 49152, 00:12:27.565 "percent": 75 00:12:27.565 } 00:12:27.565 }, 00:12:27.565 "base_bdevs_list": [ 00:12:27.565 { 00:12:27.565 "name": "spare", 00:12:27.565 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:27.565 "is_configured": true, 00:12:27.565 "data_offset": 0, 00:12:27.565 "data_size": 65536 00:12:27.565 }, 00:12:27.565 { 
00:12:27.565 "name": null, 00:12:27.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.565 "is_configured": false, 00:12:27.565 "data_offset": 0, 00:12:27.565 "data_size": 65536 00:12:27.565 }, 00:12:27.565 { 00:12:27.565 "name": "BaseBdev3", 00:12:27.565 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:27.565 "is_configured": true, 00:12:27.565 "data_offset": 0, 00:12:27.565 "data_size": 65536 00:12:27.565 }, 00:12:27.565 { 00:12:27.565 "name": "BaseBdev4", 00:12:27.565 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:27.565 "is_configured": true, 00:12:27.565 "data_offset": 0, 00:12:27.565 "data_size": 65536 00:12:27.565 } 00:12:27.565 ] 00:12:27.565 }' 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.565 [2024-12-07 01:56:32.888144] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.565 01:56:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.824 [2024-12-07 01:56:33.104636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:28.083 [2024-12-07 01:56:33.327424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:28.083 [2024-12-07 01:56:33.434247] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:28.341 [2024-12-07 01:56:33.754739] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:12:28.600 99.14 IOPS, 297.43 MiB/s [2024-12-07T01:56:34.062Z] [2024-12-07 01:56:33.859707] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.600 [2024-12-07 01:56:33.863269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.600 "name": "raid_bdev1", 00:12:28.600 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:28.600 "strip_size_kb": 0, 00:12:28.600 "state": "online", 00:12:28.600 "raid_level": "raid1", 00:12:28.600 "superblock": false, 00:12:28.600 "num_base_bdevs": 4, 00:12:28.600 "num_base_bdevs_discovered": 3, 00:12:28.600 
"num_base_bdevs_operational": 3, 00:12:28.600 "base_bdevs_list": [ 00:12:28.600 { 00:12:28.600 "name": "spare", 00:12:28.600 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:28.600 "is_configured": true, 00:12:28.600 "data_offset": 0, 00:12:28.600 "data_size": 65536 00:12:28.600 }, 00:12:28.600 { 00:12:28.600 "name": null, 00:12:28.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.600 "is_configured": false, 00:12:28.600 "data_offset": 0, 00:12:28.600 "data_size": 65536 00:12:28.600 }, 00:12:28.600 { 00:12:28.600 "name": "BaseBdev3", 00:12:28.600 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:28.600 "is_configured": true, 00:12:28.600 "data_offset": 0, 00:12:28.600 "data_size": 65536 00:12:28.600 }, 00:12:28.600 { 00:12:28.600 "name": "BaseBdev4", 00:12:28.600 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:28.600 "is_configured": true, 00:12:28.600 "data_offset": 0, 00:12:28.600 "data_size": 65536 00:12:28.600 } 00:12:28.600 ] 00:12:28.600 }' 00:12:28.600 01:56:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.600 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.600 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.858 
01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.858 "name": "raid_bdev1", 00:12:28.858 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:28.858 "strip_size_kb": 0, 00:12:28.858 "state": "online", 00:12:28.858 "raid_level": "raid1", 00:12:28.858 "superblock": false, 00:12:28.858 "num_base_bdevs": 4, 00:12:28.858 "num_base_bdevs_discovered": 3, 00:12:28.858 "num_base_bdevs_operational": 3, 00:12:28.858 "base_bdevs_list": [ 00:12:28.858 { 00:12:28.858 "name": "spare", 00:12:28.858 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 0, 00:12:28.858 "data_size": 65536 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": null, 00:12:28.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.858 "is_configured": false, 00:12:28.858 "data_offset": 0, 00:12:28.858 "data_size": 65536 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": "BaseBdev3", 00:12:28.858 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 0, 00:12:28.858 "data_size": 65536 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": "BaseBdev4", 00:12:28.858 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 0, 00:12:28.858 "data_size": 
65536 00:12:28.858 } 00:12:28.858 ] 00:12:28.858 }' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.858 "name": "raid_bdev1", 00:12:28.858 "uuid": "5351305f-2ab3-4c57-ae9f-f4b4bc50d03e", 00:12:28.859 "strip_size_kb": 0, 00:12:28.859 "state": "online", 00:12:28.859 "raid_level": "raid1", 00:12:28.859 "superblock": false, 00:12:28.859 "num_base_bdevs": 4, 00:12:28.859 "num_base_bdevs_discovered": 3, 00:12:28.859 "num_base_bdevs_operational": 3, 00:12:28.859 "base_bdevs_list": [ 00:12:28.859 { 00:12:28.859 "name": "spare", 00:12:28.859 "uuid": "27a8c196-117f-5a29-af49-050486933fd2", 00:12:28.859 "is_configured": true, 00:12:28.859 "data_offset": 0, 00:12:28.859 "data_size": 65536 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": null, 00:12:28.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.859 "is_configured": false, 00:12:28.859 "data_offset": 0, 00:12:28.859 "data_size": 65536 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": "BaseBdev3", 00:12:28.859 "uuid": "0c316af9-53d9-5295-885b-7eef02229393", 00:12:28.859 "is_configured": true, 00:12:28.859 "data_offset": 0, 00:12:28.859 "data_size": 65536 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": "BaseBdev4", 00:12:28.859 "uuid": "e9330009-2923-562c-9638-ab1a7511e263", 00:12:28.859 "is_configured": true, 00:12:28.859 "data_offset": 0, 00:12:28.859 "data_size": 65536 00:12:28.859 } 00:12:28.859 ] 00:12:28.859 }' 00:12:28.859 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.859 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.427 
01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.427 [2024-12-07 01:56:34.604327] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.427 [2024-12-07 01:56:34.604409] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.427 00:12:29.427 Latency(us) 00:12:29.427 [2024-12-07T01:56:34.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.427 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:29.427 raid_bdev1 : 7.78 92.41 277.24 0.00 0.00 13748.54 295.13 114015.47 00:12:29.427 [2024-12-07T01:56:34.889Z] =================================================================================================================== 00:12:29.427 [2024-12-07T01:56:34.889Z] Total : 92.41 277.24 0.00 0.00 13748.54 295.13 114015.47 00:12:29.427 [2024-12-07 01:56:34.620047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.427 [2024-12-07 01:56:34.620121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.427 [2024-12-07 01:56:34.620251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.427 [2024-12-07 01:56:34.620301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:29.427 { 00:12:29.427 "results": [ 00:12:29.427 { 00:12:29.427 "job": "raid_bdev1", 00:12:29.427 "core_mask": "0x1", 00:12:29.427 "workload": "randrw", 00:12:29.427 "percentage": 50, 00:12:29.427 "status": "finished", 00:12:29.427 "queue_depth": 2, 00:12:29.427 "io_size": 3145728, 00:12:29.427 "runtime": 7.780275, 00:12:29.427 "iops": 92.41318590923842, 00:12:29.427 "mibps": 277.23955772771524, 00:12:29.427 "io_failed": 0, 00:12:29.427 "io_timeout": 0, 00:12:29.427 "avg_latency_us": 13748.537908667422, 00:12:29.427 
"min_latency_us": 295.12663755458516, 00:12:29.427 "max_latency_us": 114015.46899563319 00:12:29.427 } 00:12:29.427 ], 00:12:29.427 "core_count": 1 00:12:29.427 } 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.427 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:29.427 /dev/nbd0 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.687 1+0 records in 00:12:29.687 1+0 records out 00:12:29.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496348 s, 8.3 MB/s 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.687 01:56:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:29.945 /dev/nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.945 1+0 records in 00:12:29.945 1+0 records out 00:12:29.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494059 s, 8.3 MB/s 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.945 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.203 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:30.461 /dev/nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.461 1+0 records in 00:12:30.461 1+0 records out 00:12:30.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371331 s, 11.0 MB/s 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.461 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.720 01:56:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.720 01:56:35 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89053 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 89053 ']' 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 89053 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89053 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 89053' 00:12:30.979 killing process with pid 89053 00:12:30.979 Received shutdown signal, test time was about 9.420288 seconds 00:12:30.979 00:12:30.979 Latency(us) 00:12:30.979 [2024-12-07T01:56:36.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.979 [2024-12-07T01:56:36.441Z] =================================================================================================================== 00:12:30.979 [2024-12-07T01:56:36.441Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 89053 00:12:30.979 [2024-12-07 01:56:36.253078] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.979 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 89053 00:12:30.979 [2024-12-07 01:56:36.299954] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.238 ************************************ 00:12:31.238 END TEST raid_rebuild_test_io 00:12:31.238 ************************************ 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:31.238 00:12:31.238 real 0m11.414s 00:12:31.238 user 0m14.804s 00:12:31.238 sys 0m1.738s 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.238 01:56:36 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:12:31.238 01:56:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:12:31.238 01:56:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.238 01:56:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.238 ************************************ 00:12:31.238 START TEST 
raid_rebuild_test_sb_io 00:12:31.238 ************************************ 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.238 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 
00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89445 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89445 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 89445 ']' 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.239 01:56:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.497 [2024-12-07 01:56:36.701355] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:12:31.497 [2024-12-07 01:56:36.701572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89445 ] 00:12:31.497 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:31.497 Zero copy mechanism will not be used. 
00:12:31.497 [2024-12-07 01:56:36.846837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.497 [2024-12-07 01:56:36.890884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.497 [2024-12-07 01:56:36.933614] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.497 [2024-12-07 01:56:36.933773] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 BaseBdev1_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 [2024-12-07 01:56:37.555992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.456 [2024-12-07 01:56:37.556050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.456 [2024-12-07 01:56:37.556085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:12:32.456 [2024-12-07 01:56:37.556105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.456 [2024-12-07 01:56:37.558252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.456 [2024-12-07 01:56:37.558294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.456 BaseBdev1 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 BaseBdev2_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 [2024-12-07 01:56:37.592059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.456 [2024-12-07 01:56:37.592165] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.456 [2024-12-07 01:56:37.592196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.456 [2024-12-07 01:56:37.592207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.456 [2024-12-07 01:56:37.594725] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.456 [2024-12-07 01:56:37.594762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.456 BaseBdev2 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 BaseBdev3_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 [2024-12-07 01:56:37.620422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:32.456 [2024-12-07 01:56:37.620475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.456 [2024-12-07 01:56:37.620528] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:32.456 [2024-12-07 01:56:37.620536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.456 [2024-12-07 01:56:37.622552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.456 [2024-12-07 01:56:37.622621] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:12:32.456 BaseBdev3 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 BaseBdev4_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 [2024-12-07 01:56:37.648926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:32.456 [2024-12-07 01:56:37.648972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.456 [2024-12-07 01:56:37.648992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:32.456 [2024-12-07 01:56:37.648999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.456 [2024-12-07 01:56:37.650995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.456 [2024-12-07 01:56:37.651029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:32.456 BaseBdev4 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 spare_malloc 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.456 spare_delay 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.456 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.457 [2024-12-07 01:56:37.689215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.457 [2024-12-07 01:56:37.689263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.457 [2024-12-07 01:56:37.689282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:32.457 [2024-12-07 01:56:37.689290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.457 [2024-12-07 01:56:37.691285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.457 [2024-12-07 01:56:37.691368] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.457 spare 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.457 [2024-12-07 01:56:37.701279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.457 [2024-12-07 01:56:37.703029] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.457 [2024-12-07 01:56:37.703099] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:32.457 [2024-12-07 01:56:37.703148] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:32.457 [2024-12-07 01:56:37.703298] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:32.457 [2024-12-07 01:56:37.703313] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.457 [2024-12-07 01:56:37.703552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:32.457 [2024-12-07 01:56:37.703685] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:32.457 [2024-12-07 01:56:37.703698] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:32.457 [2024-12-07 01:56:37.703827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.457 "name": "raid_bdev1", 00:12:32.457 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:32.457 "strip_size_kb": 0, 00:12:32.457 "state": "online", 00:12:32.457 "raid_level": "raid1", 
00:12:32.457 "superblock": true, 00:12:32.457 "num_base_bdevs": 4, 00:12:32.457 "num_base_bdevs_discovered": 4, 00:12:32.457 "num_base_bdevs_operational": 4, 00:12:32.457 "base_bdevs_list": [ 00:12:32.457 { 00:12:32.457 "name": "BaseBdev1", 00:12:32.457 "uuid": "7f223c31-6292-5ab4-bf05-53b4b1545213", 00:12:32.457 "is_configured": true, 00:12:32.457 "data_offset": 2048, 00:12:32.457 "data_size": 63488 00:12:32.457 }, 00:12:32.457 { 00:12:32.457 "name": "BaseBdev2", 00:12:32.457 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:32.457 "is_configured": true, 00:12:32.457 "data_offset": 2048, 00:12:32.457 "data_size": 63488 00:12:32.457 }, 00:12:32.457 { 00:12:32.457 "name": "BaseBdev3", 00:12:32.457 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:32.457 "is_configured": true, 00:12:32.457 "data_offset": 2048, 00:12:32.457 "data_size": 63488 00:12:32.457 }, 00:12:32.457 { 00:12:32.457 "name": "BaseBdev4", 00:12:32.457 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:32.457 "is_configured": true, 00:12:32.457 "data_offset": 2048, 00:12:32.457 "data_size": 63488 00:12:32.457 } 00:12:32.457 ] 00:12:32.457 }' 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.457 01:56:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.716 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:32.716 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:32.716 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.716 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.716 [2024-12-07 01:56:38.152844] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 [2024-12-07 01:56:38.248296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.975 01:56:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.975 "name": "raid_bdev1", 00:12:32.975 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:32.975 "strip_size_kb": 0, 00:12:32.975 "state": "online", 00:12:32.975 "raid_level": "raid1", 00:12:32.975 "superblock": true, 00:12:32.975 "num_base_bdevs": 4, 00:12:32.975 "num_base_bdevs_discovered": 3, 00:12:32.975 "num_base_bdevs_operational": 3, 00:12:32.975 "base_bdevs_list": [ 00:12:32.975 { 00:12:32.975 "name": null, 00:12:32.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.975 "is_configured": false, 00:12:32.975 "data_offset": 0, 00:12:32.975 "data_size": 
63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": "BaseBdev2", 00:12:32.975 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.975 "data_size": 63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": "BaseBdev3", 00:12:32.975 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.975 "data_size": 63488 00:12:32.975 }, 00:12:32.975 { 00:12:32.975 "name": "BaseBdev4", 00:12:32.975 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:32.975 "is_configured": true, 00:12:32.975 "data_offset": 2048, 00:12:32.975 "data_size": 63488 00:12:32.975 } 00:12:32.975 ] 00:12:32.975 }' 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.975 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.975 [2024-12-07 01:56:38.338284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:32.975 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.975 Zero copy mechanism will not be used. 00:12:32.975 Running I/O for 60 seconds... 
00:12:33.234 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.234 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.234 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.234 [2024-12-07 01:56:38.682922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.492 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.492 01:56:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:33.492 [2024-12-07 01:56:38.751452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:33.492 [2024-12-07 01:56:38.753492] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.492 [2024-12-07 01:56:38.873427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.492 [2024-12-07 01:56:38.874762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.751 [2024-12-07 01:56:39.081765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:33.751 [2024-12-07 01:56:39.082074] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.011 143.00 IOPS, 429.00 MiB/s [2024-12-07T01:56:39.473Z] [2024-12-07 01:56:39.417414] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:34.270 [2024-12-07 01:56:39.539717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.529 "name": "raid_bdev1", 00:12:34.529 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:34.529 "strip_size_kb": 0, 00:12:34.529 "state": "online", 00:12:34.529 "raid_level": "raid1", 00:12:34.529 "superblock": true, 00:12:34.529 "num_base_bdevs": 4, 00:12:34.529 "num_base_bdevs_discovered": 4, 00:12:34.529 "num_base_bdevs_operational": 4, 00:12:34.529 "process": { 00:12:34.529 "type": "rebuild", 00:12:34.529 "target": "spare", 00:12:34.529 "progress": { 00:12:34.529 "blocks": 12288, 00:12:34.529 "percent": 19 00:12:34.529 } 00:12:34.529 }, 00:12:34.529 "base_bdevs_list": [ 00:12:34.529 { 00:12:34.529 "name": "spare", 00:12:34.529 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:34.529 "is_configured": true, 00:12:34.529 "data_offset": 2048, 00:12:34.529 "data_size": 63488 
00:12:34.529 }, 00:12:34.529 { 00:12:34.529 "name": "BaseBdev2", 00:12:34.529 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:34.529 "is_configured": true, 00:12:34.529 "data_offset": 2048, 00:12:34.529 "data_size": 63488 00:12:34.529 }, 00:12:34.529 { 00:12:34.529 "name": "BaseBdev3", 00:12:34.529 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:34.529 "is_configured": true, 00:12:34.529 "data_offset": 2048, 00:12:34.529 "data_size": 63488 00:12:34.529 }, 00:12:34.529 { 00:12:34.529 "name": "BaseBdev4", 00:12:34.529 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:34.529 "is_configured": true, 00:12:34.529 "data_offset": 2048, 00:12:34.529 "data_size": 63488 00:12:34.529 } 00:12:34.529 ] 00:12:34.529 }' 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.529 [2024-12-07 01:56:39.790103] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.529 [2024-12-07 01:56:39.893097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.529 [2024-12-07 01:56:39.907306] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.529 [2024-12-07 
01:56:39.914058] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.529 [2024-12-07 01:56:39.923793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.529 [2024-12-07 01:56:39.923876] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.529 [2024-12-07 01:56:39.923906] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.529 [2024-12-07 01:56:39.934989] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.529 01:56:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.788 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.788 "name": "raid_bdev1", 00:12:34.788 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:34.788 "strip_size_kb": 0, 00:12:34.788 "state": "online", 00:12:34.788 "raid_level": "raid1", 00:12:34.788 "superblock": true, 00:12:34.788 "num_base_bdevs": 4, 00:12:34.788 "num_base_bdevs_discovered": 3, 00:12:34.788 "num_base_bdevs_operational": 3, 00:12:34.788 "base_bdevs_list": [ 00:12:34.788 { 00:12:34.788 "name": null, 00:12:34.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.788 "is_configured": false, 00:12:34.788 "data_offset": 0, 00:12:34.788 "data_size": 63488 00:12:34.788 }, 00:12:34.788 { 00:12:34.788 "name": "BaseBdev2", 00:12:34.788 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:34.788 "is_configured": true, 00:12:34.788 "data_offset": 2048, 00:12:34.788 "data_size": 63488 00:12:34.788 }, 00:12:34.788 { 00:12:34.788 "name": "BaseBdev3", 00:12:34.788 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:34.788 "is_configured": true, 00:12:34.788 "data_offset": 2048, 00:12:34.788 "data_size": 63488 00:12:34.788 }, 00:12:34.788 { 00:12:34.788 "name": "BaseBdev4", 00:12:34.788 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:34.788 "is_configured": true, 00:12:34.788 "data_offset": 2048, 00:12:34.788 "data_size": 63488 00:12:34.788 } 00:12:34.788 ] 00:12:34.788 }' 00:12:34.788 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.788 01:56:40 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.047 175.50 IOPS, 526.50 MiB/s [2024-12-07T01:56:40.509Z] 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.047 "name": "raid_bdev1", 00:12:35.047 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:35.047 "strip_size_kb": 0, 00:12:35.047 "state": "online", 00:12:35.047 "raid_level": "raid1", 00:12:35.047 "superblock": true, 00:12:35.047 "num_base_bdevs": 4, 00:12:35.047 "num_base_bdevs_discovered": 3, 00:12:35.047 "num_base_bdevs_operational": 3, 00:12:35.047 "base_bdevs_list": [ 00:12:35.047 { 00:12:35.047 "name": null, 00:12:35.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.047 "is_configured": false, 00:12:35.047 "data_offset": 0, 00:12:35.047 "data_size": 63488 00:12:35.047 }, 00:12:35.047 { 
00:12:35.047 "name": "BaseBdev2", 00:12:35.047 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:35.047 "is_configured": true, 00:12:35.047 "data_offset": 2048, 00:12:35.047 "data_size": 63488 00:12:35.047 }, 00:12:35.047 { 00:12:35.047 "name": "BaseBdev3", 00:12:35.047 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:35.047 "is_configured": true, 00:12:35.047 "data_offset": 2048, 00:12:35.047 "data_size": 63488 00:12:35.047 }, 00:12:35.047 { 00:12:35.047 "name": "BaseBdev4", 00:12:35.047 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:35.047 "is_configured": true, 00:12:35.047 "data_offset": 2048, 00:12:35.047 "data_size": 63488 00:12:35.047 } 00:12:35.047 ] 00:12:35.047 }' 00:12:35.047 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.306 [2024-12-07 01:56:40.560714] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.306 01:56:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:35.306 [2024-12-07 01:56:40.597174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:35.306 [2024-12-07 01:56:40.599183] 
bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.306 [2024-12-07 01:56:40.714237] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:35.306 [2024-12-07 01:56:40.714651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:35.564 [2024-12-07 01:56:40.838388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:35.564 [2024-12-07 01:56:40.839038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:35.822 [2024-12-07 01:56:41.172931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:36.081 154.67 IOPS, 464.00 MiB/s [2024-12-07T01:56:41.543Z] [2024-12-07 01:56:41.392972] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.081 [2024-12-07 01:56:41.393625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.339 01:56:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.339 "name": "raid_bdev1", 00:12:36.339 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:36.339 "strip_size_kb": 0, 00:12:36.339 "state": "online", 00:12:36.339 "raid_level": "raid1", 00:12:36.339 "superblock": true, 00:12:36.339 "num_base_bdevs": 4, 00:12:36.339 "num_base_bdevs_discovered": 4, 00:12:36.339 "num_base_bdevs_operational": 4, 00:12:36.339 "process": { 00:12:36.339 "type": "rebuild", 00:12:36.339 "target": "spare", 00:12:36.339 "progress": { 00:12:36.339 "blocks": 12288, 00:12:36.339 "percent": 19 00:12:36.339 } 00:12:36.339 }, 00:12:36.339 "base_bdevs_list": [ 00:12:36.339 { 00:12:36.339 "name": "spare", 00:12:36.339 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 2048, 00:12:36.339 "data_size": 63488 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": "BaseBdev2", 00:12:36.339 "uuid": "a492232f-b487-54fe-903d-56b1f3a929f5", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 2048, 00:12:36.339 "data_size": 63488 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": "BaseBdev3", 00:12:36.339 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 2048, 00:12:36.339 "data_size": 63488 00:12:36.339 }, 00:12:36.339 { 00:12:36.339 "name": "BaseBdev4", 00:12:36.339 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:36.339 "is_configured": true, 00:12:36.339 "data_offset": 2048, 00:12:36.339 
"data_size": 63488 00:12:36.339 } 00:12:36.339 ] 00:12:36.339 }' 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.339 [2024-12-07 01:56:41.715583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:36.339 [2024-12-07 01:56:41.716900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:36.339 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:36.340 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.340 01:56:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.340 [2024-12-07 01:56:41.743088] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:36.598 [2024-12-07 01:56:41.949082] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:36.857 [2024-12-07 01:56:42.158177] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:36.857 [2024-12-07 01:56:42.158280] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:36.857 [2024-12-07 01:56:42.170405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.857 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.858 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.858 "name": "raid_bdev1", 00:12:36.858 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:36.858 "strip_size_kb": 0, 00:12:36.858 "state": "online", 00:12:36.858 "raid_level": "raid1", 00:12:36.858 "superblock": true, 00:12:36.858 "num_base_bdevs": 4, 00:12:36.858 "num_base_bdevs_discovered": 3, 00:12:36.858 "num_base_bdevs_operational": 3, 00:12:36.858 "process": { 00:12:36.858 "type": "rebuild", 00:12:36.858 "target": "spare", 00:12:36.858 "progress": { 00:12:36.858 "blocks": 16384, 00:12:36.858 "percent": 25 00:12:36.858 } 00:12:36.858 }, 00:12:36.858 "base_bdevs_list": [ 00:12:36.858 { 00:12:36.858 "name": "spare", 00:12:36.858 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:36.858 "is_configured": true, 00:12:36.858 "data_offset": 2048, 00:12:36.858 "data_size": 63488 00:12:36.858 }, 00:12:36.858 { 00:12:36.858 "name": null, 00:12:36.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.858 "is_configured": false, 00:12:36.858 "data_offset": 0, 00:12:36.858 "data_size": 63488 00:12:36.858 }, 00:12:36.858 { 00:12:36.858 "name": "BaseBdev3", 00:12:36.858 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:36.858 "is_configured": true, 00:12:36.858 "data_offset": 2048, 00:12:36.858 "data_size": 63488 00:12:36.858 }, 00:12:36.858 { 00:12:36.858 "name": "BaseBdev4", 00:12:36.858 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:36.858 "is_configured": true, 00:12:36.858 "data_offset": 2048, 00:12:36.858 "data_size": 63488 00:12:36.858 } 00:12:36.858 ] 00:12:36.858 }' 00:12:36.858 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.858 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.858 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.117 129.50 IOPS, 388.50 MiB/s [2024-12-07T01:56:42.579Z] 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.117 "name": "raid_bdev1", 00:12:37.117 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:37.117 "strip_size_kb": 0, 00:12:37.117 "state": "online", 00:12:37.117 "raid_level": "raid1", 00:12:37.117 "superblock": true, 00:12:37.117 "num_base_bdevs": 4, 00:12:37.117 "num_base_bdevs_discovered": 3, 00:12:37.117 
"num_base_bdevs_operational": 3, 00:12:37.117 "process": { 00:12:37.117 "type": "rebuild", 00:12:37.117 "target": "spare", 00:12:37.117 "progress": { 00:12:37.117 "blocks": 16384, 00:12:37.117 "percent": 25 00:12:37.117 } 00:12:37.117 }, 00:12:37.117 "base_bdevs_list": [ 00:12:37.117 { 00:12:37.117 "name": "spare", 00:12:37.117 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:37.117 "is_configured": true, 00:12:37.117 "data_offset": 2048, 00:12:37.117 "data_size": 63488 00:12:37.117 }, 00:12:37.117 { 00:12:37.117 "name": null, 00:12:37.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.117 "is_configured": false, 00:12:37.117 "data_offset": 0, 00:12:37.117 "data_size": 63488 00:12:37.117 }, 00:12:37.117 { 00:12:37.117 "name": "BaseBdev3", 00:12:37.117 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:37.117 "is_configured": true, 00:12:37.117 "data_offset": 2048, 00:12:37.117 "data_size": 63488 00:12:37.117 }, 00:12:37.117 { 00:12:37.117 "name": "BaseBdev4", 00:12:37.117 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:37.117 "is_configured": true, 00:12:37.117 "data_offset": 2048, 00:12:37.117 "data_size": 63488 00:12:37.117 } 00:12:37.117 ] 00:12:37.117 }' 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.117 01:56:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:37.375 [2024-12-07 01:56:42.652021] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:37.671 [2024-12-07 01:56:42.993894] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:37.930 [2024-12-07 01:56:43.210147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:38.188 114.60 IOPS, 343.80 MiB/s [2024-12-07T01:56:43.650Z] 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.188 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.188 "name": "raid_bdev1", 00:12:38.188 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:38.188 "strip_size_kb": 0, 00:12:38.188 "state": "online", 00:12:38.188 "raid_level": "raid1", 00:12:38.188 "superblock": true, 00:12:38.188 "num_base_bdevs": 4, 00:12:38.188 "num_base_bdevs_discovered": 3, 
00:12:38.188 "num_base_bdevs_operational": 3, 00:12:38.189 "process": { 00:12:38.189 "type": "rebuild", 00:12:38.189 "target": "spare", 00:12:38.189 "progress": { 00:12:38.189 "blocks": 30720, 00:12:38.189 "percent": 48 00:12:38.189 } 00:12:38.189 }, 00:12:38.189 "base_bdevs_list": [ 00:12:38.189 { 00:12:38.189 "name": "spare", 00:12:38.189 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:38.189 "is_configured": true, 00:12:38.189 "data_offset": 2048, 00:12:38.189 "data_size": 63488 00:12:38.189 }, 00:12:38.189 { 00:12:38.189 "name": null, 00:12:38.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.189 "is_configured": false, 00:12:38.189 "data_offset": 0, 00:12:38.189 "data_size": 63488 00:12:38.189 }, 00:12:38.189 { 00:12:38.189 "name": "BaseBdev3", 00:12:38.189 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:38.189 "is_configured": true, 00:12:38.189 "data_offset": 2048, 00:12:38.189 "data_size": 63488 00:12:38.189 }, 00:12:38.189 { 00:12:38.189 "name": "BaseBdev4", 00:12:38.189 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:38.189 "is_configured": true, 00:12:38.189 "data_offset": 2048, 00:12:38.189 "data_size": 63488 00:12:38.189 } 00:12:38.189 ] 00:12:38.189 }' 00:12:38.189 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.189 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.189 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.189 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.189 01:56:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.447 [2024-12-07 01:56:43.800722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:38.705 [2024-12-07 01:56:44.034853] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:39.225 101.33 IOPS, 304.00 MiB/s [2024-12-07T01:56:44.687Z] 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.225 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.225 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.225 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.225 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.226 "name": "raid_bdev1", 00:12:39.226 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:39.226 "strip_size_kb": 0, 00:12:39.226 "state": "online", 00:12:39.226 "raid_level": "raid1", 00:12:39.226 "superblock": true, 00:12:39.226 "num_base_bdevs": 4, 00:12:39.226 "num_base_bdevs_discovered": 3, 00:12:39.226 "num_base_bdevs_operational": 3, 00:12:39.226 "process": { 00:12:39.226 "type": "rebuild", 00:12:39.226 "target": "spare", 00:12:39.226 "progress": { 
00:12:39.226 "blocks": 49152, 00:12:39.226 "percent": 77 00:12:39.226 } 00:12:39.226 }, 00:12:39.226 "base_bdevs_list": [ 00:12:39.226 { 00:12:39.226 "name": "spare", 00:12:39.226 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:39.226 "is_configured": true, 00:12:39.226 "data_offset": 2048, 00:12:39.226 "data_size": 63488 00:12:39.226 }, 00:12:39.226 { 00:12:39.226 "name": null, 00:12:39.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.226 "is_configured": false, 00:12:39.226 "data_offset": 0, 00:12:39.226 "data_size": 63488 00:12:39.226 }, 00:12:39.226 { 00:12:39.226 "name": "BaseBdev3", 00:12:39.226 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:39.226 "is_configured": true, 00:12:39.226 "data_offset": 2048, 00:12:39.226 "data_size": 63488 00:12:39.226 }, 00:12:39.226 { 00:12:39.226 "name": "BaseBdev4", 00:12:39.226 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:39.226 "is_configured": true, 00:12:39.226 "data_offset": 2048, 00:12:39.226 "data_size": 63488 00:12:39.226 } 00:12:39.226 ] 00:12:39.226 }' 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.226 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.484 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.484 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.484 01:56:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.742 [2024-12-07 01:56:45.019636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:40.000 [2024-12-07 01:56:45.234824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:40.258 91.86 IOPS, 275.57 MiB/s 
[2024-12-07T01:56:45.720Z] [2024-12-07 01:56:45.464860] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:40.258 [2024-12-07 01:56:45.569758] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:40.258 [2024-12-07 01:56:45.572650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.516 "name": "raid_bdev1", 00:12:40.516 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:40.516 "strip_size_kb": 0, 00:12:40.516 "state": "online", 00:12:40.516 "raid_level": "raid1", 00:12:40.516 "superblock": true, 00:12:40.516 
"num_base_bdevs": 4, 00:12:40.516 "num_base_bdevs_discovered": 3, 00:12:40.516 "num_base_bdevs_operational": 3, 00:12:40.516 "base_bdevs_list": [ 00:12:40.516 { 00:12:40.516 "name": "spare", 00:12:40.516 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:40.516 "is_configured": true, 00:12:40.516 "data_offset": 2048, 00:12:40.516 "data_size": 63488 00:12:40.516 }, 00:12:40.516 { 00:12:40.516 "name": null, 00:12:40.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.516 "is_configured": false, 00:12:40.516 "data_offset": 0, 00:12:40.516 "data_size": 63488 00:12:40.516 }, 00:12:40.516 { 00:12:40.516 "name": "BaseBdev3", 00:12:40.516 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:40.516 "is_configured": true, 00:12:40.516 "data_offset": 2048, 00:12:40.516 "data_size": 63488 00:12:40.516 }, 00:12:40.516 { 00:12:40.516 "name": "BaseBdev4", 00:12:40.516 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:40.516 "is_configured": true, 00:12:40.516 "data_offset": 2048, 00:12:40.516 "data_size": 63488 00:12:40.516 } 00:12:40.516 ] 00:12:40.516 }' 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.516 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.516 "name": "raid_bdev1", 00:12:40.516 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:40.516 "strip_size_kb": 0, 00:12:40.516 "state": "online", 00:12:40.516 "raid_level": "raid1", 00:12:40.516 "superblock": true, 00:12:40.516 "num_base_bdevs": 4, 00:12:40.516 "num_base_bdevs_discovered": 3, 00:12:40.516 "num_base_bdevs_operational": 3, 00:12:40.516 "base_bdevs_list": [ 00:12:40.516 { 00:12:40.516 "name": "spare", 00:12:40.516 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:40.516 "is_configured": true, 00:12:40.516 "data_offset": 2048, 00:12:40.516 "data_size": 63488 00:12:40.516 }, 00:12:40.516 { 00:12:40.516 "name": null, 00:12:40.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.516 "is_configured": false, 00:12:40.516 "data_offset": 0, 00:12:40.516 "data_size": 63488 00:12:40.516 }, 00:12:40.516 { 00:12:40.516 "name": "BaseBdev3", 00:12:40.516 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:40.516 "is_configured": true, 00:12:40.516 "data_offset": 2048, 00:12:40.516 "data_size": 63488 00:12:40.517 }, 00:12:40.517 { 00:12:40.517 "name": "BaseBdev4", 
00:12:40.517 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:40.517 "is_configured": true, 00:12:40.517 "data_offset": 2048, 00:12:40.517 "data_size": 63488 00:12:40.517 } 00:12:40.517 ] 00:12:40.517 }' 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.517 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.774 01:56:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.774 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.774 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.774 "name": "raid_bdev1", 00:12:40.774 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:40.774 "strip_size_kb": 0, 00:12:40.774 "state": "online", 00:12:40.774 "raid_level": "raid1", 00:12:40.774 "superblock": true, 00:12:40.774 "num_base_bdevs": 4, 00:12:40.774 "num_base_bdevs_discovered": 3, 00:12:40.774 "num_base_bdevs_operational": 3, 00:12:40.774 "base_bdevs_list": [ 00:12:40.774 { 00:12:40.774 "name": "spare", 00:12:40.774 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:40.774 "is_configured": true, 00:12:40.774 "data_offset": 2048, 00:12:40.774 "data_size": 63488 00:12:40.774 }, 00:12:40.774 { 00:12:40.774 "name": null, 00:12:40.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.774 "is_configured": false, 00:12:40.774 "data_offset": 0, 00:12:40.774 "data_size": 63488 00:12:40.774 }, 00:12:40.774 { 00:12:40.774 "name": "BaseBdev3", 00:12:40.774 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:40.774 "is_configured": true, 00:12:40.774 "data_offset": 2048, 00:12:40.774 "data_size": 63488 00:12:40.774 }, 00:12:40.774 { 00:12:40.774 "name": "BaseBdev4", 00:12:40.774 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:40.774 "is_configured": true, 00:12:40.774 "data_offset": 2048, 00:12:40.774 "data_size": 63488 00:12:40.774 } 00:12:40.774 ] 00:12:40.774 }' 00:12:40.774 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.774 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.032 
85.62 IOPS, 256.88 MiB/s [2024-12-07T01:56:46.494Z] 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.032 [2024-12-07 01:56:46.417264] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.032 [2024-12-07 01:56:46.417292] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.032 00:12:41.032 Latency(us) 00:12:41.032 [2024-12-07T01:56:46.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.032 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:41.032 raid_bdev1 : 8.11 84.73 254.18 0.00 0.00 15990.00 271.87 110352.32 00:12:41.032 [2024-12-07T01:56:46.494Z] =================================================================================================================== 00:12:41.032 [2024-12-07T01:56:46.494Z] Total : 84.73 254.18 0.00 0.00 15990.00 271.87 110352.32 00:12:41.032 { 00:12:41.032 "results": [ 00:12:41.032 { 00:12:41.032 "job": "raid_bdev1", 00:12:41.032 "core_mask": "0x1", 00:12:41.032 "workload": "randrw", 00:12:41.032 "percentage": 50, 00:12:41.032 "status": "finished", 00:12:41.032 "queue_depth": 2, 00:12:41.032 "io_size": 3145728, 00:12:41.032 "runtime": 8.108364, 00:12:41.032 "iops": 84.72732600559127, 00:12:41.032 "mibps": 254.1819780167738, 00:12:41.032 "io_failed": 0, 00:12:41.032 "io_timeout": 0, 00:12:41.032 "avg_latency_us": 15990.003140036739, 00:12:41.032 "min_latency_us": 271.87423580786026, 00:12:41.032 "max_latency_us": 110352.32139737991 00:12:41.032 } 00:12:41.032 ], 00:12:41.032 "core_count": 1 00:12:41.032 } 00:12:41.032 [2024-12-07 01:56:46.436336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:41.032 [2024-12-07 01:56:46.436374] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.032 [2024-12-07 01:56:46.436477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.032 [2024-12-07 01:56:46.436492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.032 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.290 01:56:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:41.290 /dev/nbd0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.290 1+0 records in 00:12:41.290 1+0 records out 00:12:41.290 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582127 s, 7.0 MB/s 00:12:41.290 01:56:46 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.290 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:41.548 /dev/nbd1 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.549 1+0 records in 00:12:41.549 1+0 records out 00:12:41.549 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000389156 s, 10.5 MB/s 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.549 01:56:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.808 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:42.067 /dev/nbd1 00:12:42.067 01:56:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.067 1+0 records in 00:12:42.067 1+0 records out 00:12:42.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401319 s, 10.2 MB/s 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 
00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:42.067 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.327 
01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.327 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.587 [2024-12-07 01:56:47.995963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.587 [2024-12-07 01:56:47.996023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.587 [2024-12-07 01:56:47.996049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:42.587 [2024-12-07 01:56:47.996057] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.587 [2024-12-07 01:56:47.998190] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.587 [2024-12-07 01:56:47.998272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.587 [2024-12-07 01:56:47.998366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:42.587 [2024-12-07 01:56:47.998410] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.587 [2024-12-07 01:56:47.998527] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.587 [2024-12-07 01:56:47.998628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:42.587 spare 00:12:42.587 01:56:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.587 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:12:42.587 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.587 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.845 [2024-12-07 01:56:48.098533] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:42.845 [2024-12-07 01:56:48.098563] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.845 [2024-12-07 01:56:48.098895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:12:42.845 [2024-12-07 01:56:48.099055] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:42.845 [2024-12-07 01:56:48.099071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:42.845 [2024-12-07 01:56:48.099210] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.845 "name": "raid_bdev1", 00:12:42.845 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:42.845 "strip_size_kb": 0, 00:12:42.845 "state": "online", 00:12:42.845 "raid_level": "raid1", 00:12:42.845 "superblock": true, 00:12:42.845 "num_base_bdevs": 4, 00:12:42.845 "num_base_bdevs_discovered": 3, 00:12:42.845 "num_base_bdevs_operational": 3, 00:12:42.845 "base_bdevs_list": [ 00:12:42.845 { 00:12:42.845 "name": "spare", 00:12:42.845 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:42.845 "is_configured": true, 00:12:42.845 "data_offset": 2048, 00:12:42.845 "data_size": 63488 00:12:42.845 }, 00:12:42.845 { 00:12:42.845 "name": null, 00:12:42.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.845 "is_configured": false, 00:12:42.845 "data_offset": 2048, 00:12:42.845 "data_size": 63488 00:12:42.845 }, 00:12:42.845 { 00:12:42.845 "name": "BaseBdev3", 00:12:42.845 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:42.845 "is_configured": true, 00:12:42.845 "data_offset": 2048, 00:12:42.845 "data_size": 63488 00:12:42.845 }, 
00:12:42.845 { 00:12:42.845 "name": "BaseBdev4", 00:12:42.845 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:42.845 "is_configured": true, 00:12:42.845 "data_offset": 2048, 00:12:42.845 "data_size": 63488 00:12:42.845 } 00:12:42.845 ] 00:12:42.845 }' 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.845 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.411 "name": "raid_bdev1", 00:12:43.411 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:43.411 "strip_size_kb": 0, 00:12:43.411 "state": "online", 00:12:43.411 "raid_level": "raid1", 00:12:43.411 "superblock": true, 00:12:43.411 "num_base_bdevs": 4, 00:12:43.411 
"num_base_bdevs_discovered": 3, 00:12:43.411 "num_base_bdevs_operational": 3, 00:12:43.411 "base_bdevs_list": [ 00:12:43.411 { 00:12:43.411 "name": "spare", 00:12:43.411 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:43.411 "is_configured": true, 00:12:43.411 "data_offset": 2048, 00:12:43.411 "data_size": 63488 00:12:43.411 }, 00:12:43.411 { 00:12:43.411 "name": null, 00:12:43.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.411 "is_configured": false, 00:12:43.411 "data_offset": 2048, 00:12:43.411 "data_size": 63488 00:12:43.411 }, 00:12:43.411 { 00:12:43.411 "name": "BaseBdev3", 00:12:43.411 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:43.411 "is_configured": true, 00:12:43.411 "data_offset": 2048, 00:12:43.411 "data_size": 63488 00:12:43.411 }, 00:12:43.411 { 00:12:43.411 "name": "BaseBdev4", 00:12:43.411 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:43.411 "is_configured": true, 00:12:43.411 "data_offset": 2048, 00:12:43.411 "data_size": 63488 00:12:43.411 } 00:12:43.411 ] 00:12:43.411 }' 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.411 01:56:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.411 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.412 [2024-12-07 01:56:48.762949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.412 "name": "raid_bdev1", 00:12:43.412 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:43.412 "strip_size_kb": 0, 00:12:43.412 "state": "online", 00:12:43.412 "raid_level": "raid1", 00:12:43.412 "superblock": true, 00:12:43.412 "num_base_bdevs": 4, 00:12:43.412 "num_base_bdevs_discovered": 2, 00:12:43.412 "num_base_bdevs_operational": 2, 00:12:43.412 "base_bdevs_list": [ 00:12:43.412 { 00:12:43.412 "name": null, 00:12:43.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.412 "is_configured": false, 00:12:43.412 "data_offset": 0, 00:12:43.412 "data_size": 63488 00:12:43.412 }, 00:12:43.412 { 00:12:43.412 "name": null, 00:12:43.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.412 "is_configured": false, 00:12:43.412 "data_offset": 2048, 00:12:43.412 "data_size": 63488 00:12:43.412 }, 00:12:43.412 { 00:12:43.412 "name": "BaseBdev3", 00:12:43.412 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:43.412 "is_configured": true, 00:12:43.412 "data_offset": 2048, 00:12:43.412 "data_size": 63488 00:12:43.412 }, 00:12:43.412 { 00:12:43.412 "name": "BaseBdev4", 00:12:43.412 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:43.412 "is_configured": true, 00:12:43.412 "data_offset": 2048, 00:12:43.412 "data_size": 63488 00:12:43.412 } 00:12:43.412 ] 00:12:43.412 }' 00:12:43.412 01:56:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.412 01:56:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 01:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.979 01:56:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.979 01:56:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.979 [2024-12-07 01:56:49.242211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.979 [2024-12-07 01:56:49.242425] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:43.979 [2024-12-07 01:56:49.242481] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:43.979 [2024-12-07 01:56:49.242560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.979 [2024-12-07 01:56:49.246202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:12:43.979 01:56:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.979 01:56:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:43.979 [2024-12-07 01:56:49.248125] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.916 "name": "raid_bdev1", 00:12:44.916 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:44.916 "strip_size_kb": 0, 00:12:44.916 "state": "online", 00:12:44.916 "raid_level": "raid1", 00:12:44.916 "superblock": true, 00:12:44.916 "num_base_bdevs": 4, 00:12:44.916 "num_base_bdevs_discovered": 3, 00:12:44.916 "num_base_bdevs_operational": 3, 00:12:44.916 "process": { 00:12:44.916 "type": "rebuild", 00:12:44.916 "target": "spare", 00:12:44.916 "progress": { 00:12:44.916 "blocks": 20480, 00:12:44.916 "percent": 32 00:12:44.916 } 00:12:44.916 }, 00:12:44.916 "base_bdevs_list": [ 00:12:44.916 { 00:12:44.916 "name": "spare", 00:12:44.916 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:44.916 "is_configured": true, 00:12:44.916 "data_offset": 2048, 00:12:44.916 "data_size": 63488 00:12:44.916 }, 00:12:44.916 { 00:12:44.916 "name": null, 00:12:44.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.916 "is_configured": false, 00:12:44.916 "data_offset": 2048, 00:12:44.916 "data_size": 63488 00:12:44.916 }, 00:12:44.916 { 00:12:44.916 "name": "BaseBdev3", 00:12:44.916 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:44.916 "is_configured": true, 00:12:44.916 "data_offset": 2048, 00:12:44.916 "data_size": 63488 00:12:44.916 }, 00:12:44.916 { 
00:12:44.916 "name": "BaseBdev4", 00:12:44.916 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:44.916 "is_configured": true, 00:12:44.916 "data_offset": 2048, 00:12:44.916 "data_size": 63488 00:12:44.916 } 00:12:44.916 ] 00:12:44.916 }' 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.916 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.175 [2024-12-07 01:56:50.385232] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.175 [2024-12-07 01:56:50.452229] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:45.175 [2024-12-07 01:56:50.452333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.175 [2024-12-07 01:56:50.452372] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:45.175 [2024-12-07 01:56:50.452393] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.175 "name": "raid_bdev1", 00:12:45.175 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:45.175 "strip_size_kb": 0, 00:12:45.175 "state": "online", 00:12:45.175 "raid_level": "raid1", 00:12:45.175 "superblock": true, 00:12:45.175 "num_base_bdevs": 4, 00:12:45.175 "num_base_bdevs_discovered": 2, 00:12:45.175 "num_base_bdevs_operational": 2, 00:12:45.175 "base_bdevs_list": [ 00:12:45.175 { 00:12:45.175 
"name": null, 00:12:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.175 "is_configured": false, 00:12:45.175 "data_offset": 0, 00:12:45.175 "data_size": 63488 00:12:45.175 }, 00:12:45.175 { 00:12:45.175 "name": null, 00:12:45.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.175 "is_configured": false, 00:12:45.175 "data_offset": 2048, 00:12:45.175 "data_size": 63488 00:12:45.175 }, 00:12:45.175 { 00:12:45.175 "name": "BaseBdev3", 00:12:45.175 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:45.175 "is_configured": true, 00:12:45.175 "data_offset": 2048, 00:12:45.175 "data_size": 63488 00:12:45.175 }, 00:12:45.175 { 00:12:45.175 "name": "BaseBdev4", 00:12:45.175 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:45.175 "is_configured": true, 00:12:45.175 "data_offset": 2048, 00:12:45.175 "data_size": 63488 00:12:45.175 } 00:12:45.175 ] 00:12:45.175 }' 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.175 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.741 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.741 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.741 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.741 [2024-12-07 01:56:50.939687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.741 [2024-12-07 01:56:50.939745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.741 [2024-12-07 01:56:50.939768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:45.741 [2024-12-07 01:56:50.939778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.741 [2024-12-07 01:56:50.940184] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.741 [2024-12-07 01:56:50.940200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.741 [2024-12-07 01:56:50.940284] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:45.741 [2024-12-07 01:56:50.940295] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:45.741 [2024-12-07 01:56:50.940306] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:45.741 [2024-12-07 01:56:50.940323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.741 spare 00:12:45.741 [2024-12-07 01:56:50.943823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:12:45.741 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.741 01:56:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:45.741 [2024-12-07 01:56:50.945647] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.677 01:56:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.677 "name": "raid_bdev1", 00:12:46.677 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:46.677 "strip_size_kb": 0, 00:12:46.677 "state": "online", 00:12:46.677 "raid_level": "raid1", 00:12:46.677 "superblock": true, 00:12:46.677 "num_base_bdevs": 4, 00:12:46.677 "num_base_bdevs_discovered": 3, 00:12:46.677 "num_base_bdevs_operational": 3, 00:12:46.677 "process": { 00:12:46.677 "type": "rebuild", 00:12:46.677 "target": "spare", 00:12:46.677 "progress": { 00:12:46.677 "blocks": 20480, 00:12:46.677 "percent": 32 00:12:46.677 } 00:12:46.677 }, 00:12:46.677 "base_bdevs_list": [ 00:12:46.677 { 00:12:46.677 "name": "spare", 00:12:46.677 "uuid": "322f64eb-b046-5d40-8771-266454bfe957", 00:12:46.677 "is_configured": true, 00:12:46.677 "data_offset": 2048, 00:12:46.677 "data_size": 63488 00:12:46.677 }, 00:12:46.677 { 00:12:46.677 "name": null, 00:12:46.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.677 "is_configured": false, 00:12:46.677 "data_offset": 2048, 00:12:46.677 "data_size": 63488 00:12:46.677 }, 00:12:46.677 { 00:12:46.677 "name": "BaseBdev3", 00:12:46.677 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:46.677 "is_configured": true, 00:12:46.677 "data_offset": 2048, 00:12:46.677 "data_size": 63488 00:12:46.677 }, 00:12:46.677 { 00:12:46.677 "name": "BaseBdev4", 00:12:46.677 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:46.677 "is_configured": true, 00:12:46.677 "data_offset": 2048, 00:12:46.677 "data_size": 63488 00:12:46.677 } 00:12:46.677 
] 00:12:46.677 }' 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.677 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.677 [2024-12-07 01:56:52.107241] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.936 [2024-12-07 01:56:52.149710] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.936 [2024-12-07 01:56:52.149784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.936 [2024-12-07 01:56:52.149799] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.936 [2024-12-07 01:56:52.149809] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.936 "name": "raid_bdev1", 00:12:46.936 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:46.936 "strip_size_kb": 0, 00:12:46.936 "state": "online", 00:12:46.936 "raid_level": "raid1", 00:12:46.936 "superblock": true, 00:12:46.936 "num_base_bdevs": 4, 00:12:46.936 "num_base_bdevs_discovered": 2, 00:12:46.936 "num_base_bdevs_operational": 2, 00:12:46.936 "base_bdevs_list": [ 00:12:46.936 { 00:12:46.936 "name": null, 00:12:46.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.936 "is_configured": false, 00:12:46.936 "data_offset": 0, 00:12:46.936 "data_size": 63488 00:12:46.936 }, 00:12:46.936 { 
00:12:46.936 "name": null, 00:12:46.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.936 "is_configured": false, 00:12:46.936 "data_offset": 2048, 00:12:46.936 "data_size": 63488 00:12:46.936 }, 00:12:46.936 { 00:12:46.936 "name": "BaseBdev3", 00:12:46.936 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:46.936 "is_configured": true, 00:12:46.936 "data_offset": 2048, 00:12:46.936 "data_size": 63488 00:12:46.936 }, 00:12:46.936 { 00:12:46.936 "name": "BaseBdev4", 00:12:46.936 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:46.936 "is_configured": true, 00:12:46.936 "data_offset": 2048, 00:12:46.936 "data_size": 63488 00:12:46.936 } 00:12:46.936 ] 00:12:46.936 }' 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.936 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.194 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.195 "name": "raid_bdev1", 00:12:47.195 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:47.195 "strip_size_kb": 0, 00:12:47.195 "state": "online", 00:12:47.195 "raid_level": "raid1", 00:12:47.195 "superblock": true, 00:12:47.195 "num_base_bdevs": 4, 00:12:47.195 "num_base_bdevs_discovered": 2, 00:12:47.195 "num_base_bdevs_operational": 2, 00:12:47.195 "base_bdevs_list": [ 00:12:47.195 { 00:12:47.195 "name": null, 00:12:47.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.195 "is_configured": false, 00:12:47.195 "data_offset": 0, 00:12:47.195 "data_size": 63488 00:12:47.195 }, 00:12:47.195 { 00:12:47.195 "name": null, 00:12:47.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.195 "is_configured": false, 00:12:47.195 "data_offset": 2048, 00:12:47.195 "data_size": 63488 00:12:47.195 }, 00:12:47.195 { 00:12:47.195 "name": "BaseBdev3", 00:12:47.195 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:47.195 "is_configured": true, 00:12:47.195 "data_offset": 2048, 00:12:47.195 "data_size": 63488 00:12:47.195 }, 00:12:47.195 { 00:12:47.195 "name": "BaseBdev4", 00:12:47.195 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:47.195 "is_configured": true, 00:12:47.195 "data_offset": 2048, 00:12:47.195 "data_size": 63488 00:12:47.195 } 00:12:47.195 ] 00:12:47.195 }' 00:12:47.195 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.456 [2024-12-07 01:56:52.732798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.456 [2024-12-07 01:56:52.732899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.456 [2024-12-07 01:56:52.732938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:47.456 [2024-12-07 01:56:52.732972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.456 [2024-12-07 01:56:52.733378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.456 [2024-12-07 01:56:52.733402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.456 [2024-12-07 01:56:52.733470] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:47.456 [2024-12-07 01:56:52.733494] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:47.456 [2024-12-07 01:56:52.733504] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:47.456 [2024-12-07 01:56:52.733519] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:47.456 BaseBdev1 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.456 01:56:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.389 "name": "raid_bdev1", 00:12:48.389 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:48.389 "strip_size_kb": 0, 00:12:48.389 "state": "online", 00:12:48.389 "raid_level": "raid1", 00:12:48.389 "superblock": true, 00:12:48.389 "num_base_bdevs": 4, 00:12:48.389 "num_base_bdevs_discovered": 2, 00:12:48.389 "num_base_bdevs_operational": 2, 00:12:48.389 "base_bdevs_list": [ 00:12:48.389 { 00:12:48.389 "name": null, 00:12:48.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.389 "is_configured": false, 00:12:48.389 "data_offset": 0, 00:12:48.389 "data_size": 63488 00:12:48.389 }, 00:12:48.389 { 00:12:48.389 "name": null, 00:12:48.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.389 "is_configured": false, 00:12:48.389 "data_offset": 2048, 00:12:48.389 "data_size": 63488 00:12:48.389 }, 00:12:48.389 { 00:12:48.389 "name": "BaseBdev3", 00:12:48.389 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:48.389 "is_configured": true, 00:12:48.389 "data_offset": 2048, 00:12:48.389 "data_size": 63488 00:12:48.389 }, 00:12:48.389 { 00:12:48.389 "name": "BaseBdev4", 00:12:48.389 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:48.389 "is_configured": true, 00:12:48.389 "data_offset": 2048, 00:12:48.389 "data_size": 63488 00:12:48.389 } 00:12:48.389 ] 00:12:48.389 }' 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.389 01:56:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.954 "name": "raid_bdev1", 00:12:48.954 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:48.954 "strip_size_kb": 0, 00:12:48.954 "state": "online", 00:12:48.954 "raid_level": "raid1", 00:12:48.954 "superblock": true, 00:12:48.954 "num_base_bdevs": 4, 00:12:48.954 "num_base_bdevs_discovered": 2, 00:12:48.954 "num_base_bdevs_operational": 2, 00:12:48.954 "base_bdevs_list": [ 00:12:48.954 { 00:12:48.954 "name": null, 00:12:48.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.954 "is_configured": false, 00:12:48.954 "data_offset": 0, 00:12:48.954 "data_size": 63488 00:12:48.954 }, 00:12:48.954 { 00:12:48.954 "name": null, 00:12:48.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.954 "is_configured": false, 00:12:48.954 "data_offset": 2048, 00:12:48.954 "data_size": 63488 00:12:48.954 }, 00:12:48.954 { 00:12:48.954 "name": "BaseBdev3", 00:12:48.954 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:48.954 "is_configured": true, 00:12:48.954 "data_offset": 2048, 00:12:48.954 "data_size": 63488 00:12:48.954 }, 00:12:48.954 { 00:12:48.954 
"name": "BaseBdev4", 00:12:48.954 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:48.954 "is_configured": true, 00:12:48.954 "data_offset": 2048, 00:12:48.954 "data_size": 63488 00:12:48.954 } 00:12:48.954 ] 00:12:48.954 }' 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.954 [2024-12-07 01:56:54.314339] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.954 [2024-12-07 01:56:54.314563] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:48.954 [2024-12-07 01:56:54.314627] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:48.954 request: 00:12:48.954 { 00:12:48.954 "base_bdev": "BaseBdev1", 00:12:48.954 "raid_bdev": "raid_bdev1", 00:12:48.954 "method": "bdev_raid_add_base_bdev", 00:12:48.954 "req_id": 1 00:12:48.954 } 00:12:48.954 Got JSON-RPC error response 00:12:48.954 response: 00:12:48.954 { 00:12:48.954 "code": -22, 00:12:48.954 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:48.954 } 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:48.954 01:56:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.889 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.147 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.147 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.147 "name": "raid_bdev1", 00:12:50.147 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:50.147 "strip_size_kb": 0, 00:12:50.147 "state": "online", 00:12:50.147 "raid_level": "raid1", 00:12:50.147 "superblock": true, 00:12:50.147 "num_base_bdevs": 4, 00:12:50.147 "num_base_bdevs_discovered": 2, 00:12:50.147 "num_base_bdevs_operational": 2, 00:12:50.147 "base_bdevs_list": [ 00:12:50.147 { 00:12:50.147 "name": null, 00:12:50.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.147 "is_configured": false, 00:12:50.147 "data_offset": 0, 00:12:50.147 "data_size": 63488 00:12:50.147 }, 00:12:50.147 { 00:12:50.147 "name": null, 00:12:50.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.147 "is_configured": false, 
00:12:50.147 "data_offset": 2048, 00:12:50.147 "data_size": 63488 00:12:50.147 }, 00:12:50.147 { 00:12:50.147 "name": "BaseBdev3", 00:12:50.147 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:50.147 "is_configured": true, 00:12:50.147 "data_offset": 2048, 00:12:50.147 "data_size": 63488 00:12:50.147 }, 00:12:50.147 { 00:12:50.147 "name": "BaseBdev4", 00:12:50.147 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:50.147 "is_configured": true, 00:12:50.147 "data_offset": 2048, 00:12:50.147 "data_size": 63488 00:12:50.147 } 00:12:50.147 ] 00:12:50.147 }' 00:12:50.147 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.147 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.405 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:50.405 "name": "raid_bdev1", 00:12:50.405 "uuid": "e6b945a3-7ae2-469b-8f80-8565da238ed9", 00:12:50.405 "strip_size_kb": 0, 00:12:50.405 "state": "online", 00:12:50.405 "raid_level": "raid1", 00:12:50.405 "superblock": true, 00:12:50.405 "num_base_bdevs": 4, 00:12:50.405 "num_base_bdevs_discovered": 2, 00:12:50.405 "num_base_bdevs_operational": 2, 00:12:50.405 "base_bdevs_list": [ 00:12:50.405 { 00:12:50.405 "name": null, 00:12:50.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.405 "is_configured": false, 00:12:50.405 "data_offset": 0, 00:12:50.405 "data_size": 63488 00:12:50.406 }, 00:12:50.406 { 00:12:50.406 "name": null, 00:12:50.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.406 "is_configured": false, 00:12:50.406 "data_offset": 2048, 00:12:50.406 "data_size": 63488 00:12:50.406 }, 00:12:50.406 { 00:12:50.406 "name": "BaseBdev3", 00:12:50.406 "uuid": "7d0ddc30-cac0-57a5-bfcf-9077d0ec7776", 00:12:50.406 "is_configured": true, 00:12:50.406 "data_offset": 2048, 00:12:50.406 "data_size": 63488 00:12:50.406 }, 00:12:50.406 { 00:12:50.406 "name": "BaseBdev4", 00:12:50.406 "uuid": "b1824ccb-ed7d-5ef6-af6d-8316698d07c3", 00:12:50.406 "is_configured": true, 00:12:50.406 "data_offset": 2048, 00:12:50.406 "data_size": 63488 00:12:50.406 } 00:12:50.406 ] 00:12:50.406 }' 00:12:50.406 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.406 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.406 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89445 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 
89445 ']' 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 89445 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89445 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.665 killing process with pid 89445 00:12:50.665 Received shutdown signal, test time was about 17.641621 seconds 00:12:50.665 00:12:50.665 Latency(us) 00:12:50.665 [2024-12-07T01:56:56.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.665 [2024-12-07T01:56:56.127Z] =================================================================================================================== 00:12:50.665 [2024-12-07T01:56:56.127Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89445' 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 89445 00:12:50.665 [2024-12-07 01:56:55.948068] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.665 [2024-12-07 01:56:55.948193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.665 01:56:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 89445 00:12:50.665 [2024-12-07 01:56:55.948265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.665 [2024-12-07 01:56:55.948275] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:50.665 [2024-12-07 01:56:55.993468] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.925 01:56:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:50.925 00:12:50.925 real 0m19.620s 00:12:50.925 user 0m26.133s 00:12:50.925 sys 0m2.524s 00:12:50.925 01:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.925 ************************************ 00:12:50.925 END TEST raid_rebuild_test_sb_io 00:12:50.925 ************************************ 00:12:50.925 01:56:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.925 01:56:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:50.925 01:56:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:50.925 01:56:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:50.925 01:56:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.925 01:56:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.925 ************************************ 00:12:50.925 START TEST raid5f_state_function_test 00:12:50.925 ************************************ 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90153 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90153' 00:12:50.925 Process raid pid: 90153 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90153 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 90153 ']' 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.925 01:56:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.184 [2024-12-07 01:56:56.396500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:12:51.184 [2024-12-07 01:56:56.396632] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.184 [2024-12-07 01:56:56.543446] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.184 [2024-12-07 01:56:56.588087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.184 [2024-12-07 01:56:56.629363] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.184 [2024-12-07 01:56:56.629402] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.121 [2024-12-07 01:56:57.247228] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.121 [2024-12-07 01:56:57.247275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.121 [2024-12-07 01:56:57.247286] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.121 [2024-12-07 01:56:57.247296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.121 [2024-12-07 01:56:57.247302] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:52.121 [2024-12-07 01:56:57.247314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.121 "name": "Existed_Raid", 00:12:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.121 "strip_size_kb": 64, 00:12:52.121 "state": "configuring", 00:12:52.121 "raid_level": "raid5f", 00:12:52.121 "superblock": false, 00:12:52.121 "num_base_bdevs": 3, 00:12:52.121 "num_base_bdevs_discovered": 0, 00:12:52.121 "num_base_bdevs_operational": 3, 00:12:52.121 "base_bdevs_list": [ 00:12:52.121 { 00:12:52.121 "name": "BaseBdev1", 00:12:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.121 "is_configured": false, 00:12:52.121 "data_offset": 0, 00:12:52.121 "data_size": 0 00:12:52.121 }, 00:12:52.121 { 00:12:52.121 "name": "BaseBdev2", 00:12:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.121 "is_configured": false, 00:12:52.121 "data_offset": 0, 00:12:52.121 "data_size": 0 00:12:52.121 }, 00:12:52.121 { 00:12:52.121 "name": "BaseBdev3", 00:12:52.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.121 "is_configured": false, 00:12:52.121 "data_offset": 0, 00:12:52.121 "data_size": 0 00:12:52.121 } 00:12:52.121 ] 00:12:52.121 }' 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.121 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.380 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.380 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 [2024-12-07 01:56:57.702318] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.381 [2024-12-07 01:56:57.702365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 [2024-12-07 01:56:57.714325] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:52.381 [2024-12-07 01:56:57.714383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:52.381 [2024-12-07 01:56:57.714392] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.381 [2024-12-07 01:56:57.714401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.381 [2024-12-07 01:56:57.714407] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.381 [2024-12-07 01:56:57.714415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 [2024-12-07 01:56:57.735081] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.381 BaseBdev1 00:12:52.381 01:56:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 [ 00:12:52.381 { 00:12:52.381 "name": "BaseBdev1", 00:12:52.381 "aliases": [ 00:12:52.381 "8833e357-bc15-42f0-9c26-b3b106da66cb" 00:12:52.381 ], 00:12:52.381 "product_name": "Malloc disk", 00:12:52.381 "block_size": 512, 00:12:52.381 "num_blocks": 65536, 00:12:52.381 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:52.381 "assigned_rate_limits": { 00:12:52.381 "rw_ios_per_sec": 0, 00:12:52.381 
"rw_mbytes_per_sec": 0, 00:12:52.381 "r_mbytes_per_sec": 0, 00:12:52.381 "w_mbytes_per_sec": 0 00:12:52.381 }, 00:12:52.381 "claimed": true, 00:12:52.381 "claim_type": "exclusive_write", 00:12:52.381 "zoned": false, 00:12:52.381 "supported_io_types": { 00:12:52.381 "read": true, 00:12:52.381 "write": true, 00:12:52.381 "unmap": true, 00:12:52.381 "flush": true, 00:12:52.381 "reset": true, 00:12:52.381 "nvme_admin": false, 00:12:52.381 "nvme_io": false, 00:12:52.381 "nvme_io_md": false, 00:12:52.381 "write_zeroes": true, 00:12:52.381 "zcopy": true, 00:12:52.381 "get_zone_info": false, 00:12:52.381 "zone_management": false, 00:12:52.381 "zone_append": false, 00:12:52.381 "compare": false, 00:12:52.381 "compare_and_write": false, 00:12:52.381 "abort": true, 00:12:52.381 "seek_hole": false, 00:12:52.381 "seek_data": false, 00:12:52.381 "copy": true, 00:12:52.381 "nvme_iov_md": false 00:12:52.381 }, 00:12:52.381 "memory_domains": [ 00:12:52.381 { 00:12:52.381 "dma_device_id": "system", 00:12:52.381 "dma_device_type": 1 00:12:52.381 }, 00:12:52.381 { 00:12:52.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.381 "dma_device_type": 2 00:12:52.381 } 00:12:52.381 ], 00:12:52.381 "driver_specific": {} 00:12:52.381 } 00:12:52.381 ] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.381 01:56:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.381 "name": "Existed_Raid", 00:12:52.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.381 "strip_size_kb": 64, 00:12:52.381 "state": "configuring", 00:12:52.381 "raid_level": "raid5f", 00:12:52.381 "superblock": false, 00:12:52.381 "num_base_bdevs": 3, 00:12:52.381 "num_base_bdevs_discovered": 1, 00:12:52.381 "num_base_bdevs_operational": 3, 00:12:52.381 "base_bdevs_list": [ 00:12:52.381 { 00:12:52.381 "name": "BaseBdev1", 00:12:52.381 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:52.381 "is_configured": true, 00:12:52.381 "data_offset": 0, 00:12:52.381 "data_size": 65536 00:12:52.381 }, 00:12:52.381 { 00:12:52.381 "name": 
"BaseBdev2", 00:12:52.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.381 "is_configured": false, 00:12:52.381 "data_offset": 0, 00:12:52.381 "data_size": 0 00:12:52.381 }, 00:12:52.381 { 00:12:52.381 "name": "BaseBdev3", 00:12:52.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.381 "is_configured": false, 00:12:52.381 "data_offset": 0, 00:12:52.381 "data_size": 0 00:12:52.381 } 00:12:52.381 ] 00:12:52.381 }' 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.381 01:56:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.949 [2024-12-07 01:56:58.226255] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.949 [2024-12-07 01:56:58.226304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.949 [2024-12-07 01:56:58.238292] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.949 [2024-12-07 01:56:58.240198] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:52.949 [2024-12-07 01:56:58.240243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.949 [2024-12-07 01:56:58.240253] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.949 [2024-12-07 01:56:58.240263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.949 "name": "Existed_Raid", 00:12:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.949 "strip_size_kb": 64, 00:12:52.949 "state": "configuring", 00:12:52.949 "raid_level": "raid5f", 00:12:52.949 "superblock": false, 00:12:52.949 "num_base_bdevs": 3, 00:12:52.949 "num_base_bdevs_discovered": 1, 00:12:52.949 "num_base_bdevs_operational": 3, 00:12:52.949 "base_bdevs_list": [ 00:12:52.949 { 00:12:52.949 "name": "BaseBdev1", 00:12:52.949 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:52.949 "is_configured": true, 00:12:52.949 "data_offset": 0, 00:12:52.949 "data_size": 65536 00:12:52.949 }, 00:12:52.949 { 00:12:52.949 "name": "BaseBdev2", 00:12:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.949 "is_configured": false, 00:12:52.949 "data_offset": 0, 00:12:52.949 "data_size": 0 00:12:52.949 }, 00:12:52.949 { 00:12:52.949 "name": "BaseBdev3", 00:12:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.949 "is_configured": false, 00:12:52.949 "data_offset": 0, 00:12:52.949 "data_size": 0 00:12:52.949 } 00:12:52.949 ] 00:12:52.949 }' 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.949 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.210 01:56:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:53.210 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.210 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 [2024-12-07 01:56:58.670620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.469 BaseBdev2 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.469 [ 00:12:53.469 { 00:12:53.469 "name": "BaseBdev2", 00:12:53.469 "aliases": [ 00:12:53.469 "82e99f78-509e-4f3b-a337-e77642a26b54" 00:12:53.469 ], 00:12:53.469 "product_name": "Malloc disk", 00:12:53.469 "block_size": 512, 00:12:53.469 "num_blocks": 65536, 00:12:53.469 "uuid": "82e99f78-509e-4f3b-a337-e77642a26b54", 00:12:53.469 "assigned_rate_limits": { 00:12:53.469 "rw_ios_per_sec": 0, 00:12:53.469 "rw_mbytes_per_sec": 0, 00:12:53.469 "r_mbytes_per_sec": 0, 00:12:53.469 "w_mbytes_per_sec": 0 00:12:53.469 }, 00:12:53.469 "claimed": true, 00:12:53.469 "claim_type": "exclusive_write", 00:12:53.469 "zoned": false, 00:12:53.469 "supported_io_types": { 00:12:53.469 "read": true, 00:12:53.469 "write": true, 00:12:53.469 "unmap": true, 00:12:53.469 "flush": true, 00:12:53.469 "reset": true, 00:12:53.469 "nvme_admin": false, 00:12:53.469 "nvme_io": false, 00:12:53.469 "nvme_io_md": false, 00:12:53.469 "write_zeroes": true, 00:12:53.469 "zcopy": true, 00:12:53.469 "get_zone_info": false, 00:12:53.469 "zone_management": false, 00:12:53.469 "zone_append": false, 00:12:53.469 "compare": false, 00:12:53.469 "compare_and_write": false, 00:12:53.469 "abort": true, 00:12:53.469 "seek_hole": false, 00:12:53.469 "seek_data": false, 00:12:53.469 "copy": true, 00:12:53.469 "nvme_iov_md": false 00:12:53.469 }, 00:12:53.469 "memory_domains": [ 00:12:53.469 { 00:12:53.469 "dma_device_id": "system", 00:12:53.469 "dma_device_type": 1 00:12:53.469 }, 00:12:53.469 { 00:12:53.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.469 "dma_device_type": 2 00:12:53.469 } 00:12:53.469 ], 00:12:53.469 "driver_specific": {} 00:12:53.469 } 00:12:53.469 ] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:53.469 "name": "Existed_Raid", 00:12:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.469 "strip_size_kb": 64, 00:12:53.469 "state": "configuring", 00:12:53.469 "raid_level": "raid5f", 00:12:53.469 "superblock": false, 00:12:53.469 "num_base_bdevs": 3, 00:12:53.469 "num_base_bdevs_discovered": 2, 00:12:53.469 "num_base_bdevs_operational": 3, 00:12:53.469 "base_bdevs_list": [ 00:12:53.469 { 00:12:53.469 "name": "BaseBdev1", 00:12:53.469 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:53.469 "is_configured": true, 00:12:53.469 "data_offset": 0, 00:12:53.469 "data_size": 65536 00:12:53.469 }, 00:12:53.469 { 00:12:53.469 "name": "BaseBdev2", 00:12:53.469 "uuid": "82e99f78-509e-4f3b-a337-e77642a26b54", 00:12:53.469 "is_configured": true, 00:12:53.469 "data_offset": 0, 00:12:53.469 "data_size": 65536 00:12:53.469 }, 00:12:53.469 { 00:12:53.469 "name": "BaseBdev3", 00:12:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.469 "is_configured": false, 00:12:53.469 "data_offset": 0, 00:12:53.469 "data_size": 0 00:12:53.469 } 00:12:53.469 ] 00:12:53.469 }' 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.469 01:56:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.728 [2024-12-07 01:56:59.160765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.728 [2024-12-07 01:56:59.160819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:53.728 [2024-12-07 01:56:59.160831] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:53.728 [2024-12-07 01:56:59.161094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:53.728 [2024-12-07 01:56:59.161563] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:53.728 [2024-12-07 01:56:59.161582] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:53.728 [2024-12-07 01:56:59.161806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:53.728 BaseBdev3 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.728 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.728 [ 00:12:53.728 { 00:12:53.728 "name": "BaseBdev3", 00:12:53.728 "aliases": [ 00:12:53.728 "14dfd48e-5c45-4ed0-bb64-096cc07b3cf4" 00:12:53.728 ], 00:12:53.728 "product_name": "Malloc disk", 00:12:53.728 "block_size": 512, 00:12:53.728 "num_blocks": 65536, 00:12:53.728 "uuid": "14dfd48e-5c45-4ed0-bb64-096cc07b3cf4", 00:12:53.728 "assigned_rate_limits": { 00:12:53.986 "rw_ios_per_sec": 0, 00:12:53.986 "rw_mbytes_per_sec": 0, 00:12:53.986 "r_mbytes_per_sec": 0, 00:12:53.986 "w_mbytes_per_sec": 0 00:12:53.986 }, 00:12:53.986 "claimed": true, 00:12:53.986 "claim_type": "exclusive_write", 00:12:53.986 "zoned": false, 00:12:53.986 "supported_io_types": { 00:12:53.986 "read": true, 00:12:53.986 "write": true, 00:12:53.986 "unmap": true, 00:12:53.986 "flush": true, 00:12:53.986 "reset": true, 00:12:53.986 "nvme_admin": false, 00:12:53.986 "nvme_io": false, 00:12:53.986 "nvme_io_md": false, 00:12:53.986 "write_zeroes": true, 00:12:53.986 "zcopy": true, 00:12:53.986 "get_zone_info": false, 00:12:53.986 "zone_management": false, 00:12:53.986 "zone_append": false, 00:12:53.986 "compare": false, 00:12:53.986 "compare_and_write": false, 00:12:53.986 "abort": true, 00:12:53.986 "seek_hole": false, 00:12:53.986 "seek_data": false, 00:12:53.986 "copy": true, 00:12:53.986 "nvme_iov_md": false 00:12:53.986 }, 00:12:53.986 "memory_domains": [ 00:12:53.986 { 00:12:53.986 "dma_device_id": "system", 00:12:53.986 "dma_device_type": 1 00:12:53.986 }, 00:12:53.986 { 00:12:53.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.986 "dma_device_type": 2 00:12:53.986 } 00:12:53.986 ], 00:12:53.986 "driver_specific": {} 00:12:53.986 } 00:12:53.986 ] 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.986 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.987 01:56:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.987 "name": "Existed_Raid", 00:12:53.987 "uuid": "3f254386-b11f-49e6-9fac-ed0850168f1f", 00:12:53.987 "strip_size_kb": 64, 00:12:53.987 "state": "online", 00:12:53.987 "raid_level": "raid5f", 00:12:53.987 "superblock": false, 00:12:53.987 "num_base_bdevs": 3, 00:12:53.987 "num_base_bdevs_discovered": 3, 00:12:53.987 "num_base_bdevs_operational": 3, 00:12:53.987 "base_bdevs_list": [ 00:12:53.987 { 00:12:53.987 "name": "BaseBdev1", 00:12:53.987 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:53.987 "is_configured": true, 00:12:53.987 "data_offset": 0, 00:12:53.987 "data_size": 65536 00:12:53.987 }, 00:12:53.987 { 00:12:53.987 "name": "BaseBdev2", 00:12:53.987 "uuid": "82e99f78-509e-4f3b-a337-e77642a26b54", 00:12:53.987 "is_configured": true, 00:12:53.987 "data_offset": 0, 00:12:53.987 "data_size": 65536 00:12:53.987 }, 00:12:53.987 { 00:12:53.987 "name": "BaseBdev3", 00:12:53.987 "uuid": "14dfd48e-5c45-4ed0-bb64-096cc07b3cf4", 00:12:53.987 "is_configured": true, 00:12:53.987 "data_offset": 0, 00:12:53.987 "data_size": 65536 00:12:53.987 } 00:12:53.987 ] 00:12:53.987 }' 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.987 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.246 01:56:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.246 [2024-12-07 01:56:59.652189] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.246 "name": "Existed_Raid", 00:12:54.246 "aliases": [ 00:12:54.246 "3f254386-b11f-49e6-9fac-ed0850168f1f" 00:12:54.246 ], 00:12:54.246 "product_name": "Raid Volume", 00:12:54.246 "block_size": 512, 00:12:54.246 "num_blocks": 131072, 00:12:54.246 "uuid": "3f254386-b11f-49e6-9fac-ed0850168f1f", 00:12:54.246 "assigned_rate_limits": { 00:12:54.246 "rw_ios_per_sec": 0, 00:12:54.246 "rw_mbytes_per_sec": 0, 00:12:54.246 "r_mbytes_per_sec": 0, 00:12:54.246 "w_mbytes_per_sec": 0 00:12:54.246 }, 00:12:54.246 "claimed": false, 00:12:54.246 "zoned": false, 00:12:54.246 "supported_io_types": { 00:12:54.246 "read": true, 00:12:54.246 "write": true, 00:12:54.246 "unmap": false, 00:12:54.246 "flush": false, 00:12:54.246 "reset": true, 00:12:54.246 "nvme_admin": false, 00:12:54.246 "nvme_io": false, 00:12:54.246 "nvme_io_md": false, 00:12:54.246 "write_zeroes": true, 00:12:54.246 "zcopy": false, 00:12:54.246 "get_zone_info": false, 00:12:54.246 "zone_management": false, 00:12:54.246 "zone_append": false, 
00:12:54.246 "compare": false, 00:12:54.246 "compare_and_write": false, 00:12:54.246 "abort": false, 00:12:54.246 "seek_hole": false, 00:12:54.246 "seek_data": false, 00:12:54.246 "copy": false, 00:12:54.246 "nvme_iov_md": false 00:12:54.246 }, 00:12:54.246 "driver_specific": { 00:12:54.246 "raid": { 00:12:54.246 "uuid": "3f254386-b11f-49e6-9fac-ed0850168f1f", 00:12:54.246 "strip_size_kb": 64, 00:12:54.246 "state": "online", 00:12:54.246 "raid_level": "raid5f", 00:12:54.246 "superblock": false, 00:12:54.246 "num_base_bdevs": 3, 00:12:54.246 "num_base_bdevs_discovered": 3, 00:12:54.246 "num_base_bdevs_operational": 3, 00:12:54.246 "base_bdevs_list": [ 00:12:54.246 { 00:12:54.246 "name": "BaseBdev1", 00:12:54.246 "uuid": "8833e357-bc15-42f0-9c26-b3b106da66cb", 00:12:54.246 "is_configured": true, 00:12:54.246 "data_offset": 0, 00:12:54.246 "data_size": 65536 00:12:54.246 }, 00:12:54.246 { 00:12:54.246 "name": "BaseBdev2", 00:12:54.246 "uuid": "82e99f78-509e-4f3b-a337-e77642a26b54", 00:12:54.246 "is_configured": true, 00:12:54.246 "data_offset": 0, 00:12:54.246 "data_size": 65536 00:12:54.246 }, 00:12:54.246 { 00:12:54.246 "name": "BaseBdev3", 00:12:54.246 "uuid": "14dfd48e-5c45-4ed0-bb64-096cc07b3cf4", 00:12:54.246 "is_configured": true, 00:12:54.246 "data_offset": 0, 00:12:54.246 "data_size": 65536 00:12:54.246 } 00:12:54.246 ] 00:12:54.246 } 00:12:54.246 } 00:12:54.246 }' 00:12:54.246 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.506 BaseBdev2 00:12:54.506 BaseBdev3' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.506 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.507 [2024-12-07 01:56:59.895595] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:54.507 
01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.507 "name": "Existed_Raid", 00:12:54.507 "uuid": "3f254386-b11f-49e6-9fac-ed0850168f1f", 00:12:54.507 "strip_size_kb": 64, 00:12:54.507 "state": 
"online", 00:12:54.507 "raid_level": "raid5f", 00:12:54.507 "superblock": false, 00:12:54.507 "num_base_bdevs": 3, 00:12:54.507 "num_base_bdevs_discovered": 2, 00:12:54.507 "num_base_bdevs_operational": 2, 00:12:54.507 "base_bdevs_list": [ 00:12:54.507 { 00:12:54.507 "name": null, 00:12:54.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.507 "is_configured": false, 00:12:54.507 "data_offset": 0, 00:12:54.507 "data_size": 65536 00:12:54.507 }, 00:12:54.507 { 00:12:54.507 "name": "BaseBdev2", 00:12:54.507 "uuid": "82e99f78-509e-4f3b-a337-e77642a26b54", 00:12:54.507 "is_configured": true, 00:12:54.507 "data_offset": 0, 00:12:54.507 "data_size": 65536 00:12:54.507 }, 00:12:54.507 { 00:12:54.507 "name": "BaseBdev3", 00:12:54.507 "uuid": "14dfd48e-5c45-4ed0-bb64-096cc07b3cf4", 00:12:54.507 "is_configured": true, 00:12:54.507 "data_offset": 0, 00:12:54.507 "data_size": 65536 00:12:54.507 } 00:12:54.507 ] 00:12:54.507 }' 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.507 01:56:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 [2024-12-07 01:57:00.402092] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.074 [2024-12-07 01:57:00.402190] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.074 [2024-12-07 01:57:00.413532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 [2024-12-07 01:57:00.473477] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:55.074 [2024-12-07 01:57:00.473529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.074 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 BaseBdev2 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:55.332 [ 00:12:55.332 { 00:12:55.332 "name": "BaseBdev2", 00:12:55.332 "aliases": [ 00:12:55.332 "83167244-39b3-442c-abb4-888b2323b072" 00:12:55.332 ], 00:12:55.332 "product_name": "Malloc disk", 00:12:55.332 "block_size": 512, 00:12:55.332 "num_blocks": 65536, 00:12:55.332 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:55.332 "assigned_rate_limits": { 00:12:55.332 "rw_ios_per_sec": 0, 00:12:55.332 "rw_mbytes_per_sec": 0, 00:12:55.332 "r_mbytes_per_sec": 0, 00:12:55.332 "w_mbytes_per_sec": 0 00:12:55.332 }, 00:12:55.332 "claimed": false, 00:12:55.332 "zoned": false, 00:12:55.332 "supported_io_types": { 00:12:55.332 "read": true, 00:12:55.332 "write": true, 00:12:55.332 "unmap": true, 00:12:55.332 "flush": true, 00:12:55.332 "reset": true, 00:12:55.332 "nvme_admin": false, 00:12:55.332 "nvme_io": false, 00:12:55.332 "nvme_io_md": false, 00:12:55.332 "write_zeroes": true, 00:12:55.332 "zcopy": true, 00:12:55.332 "get_zone_info": false, 00:12:55.332 "zone_management": false, 00:12:55.332 "zone_append": false, 00:12:55.332 "compare": false, 00:12:55.332 "compare_and_write": false, 00:12:55.332 "abort": true, 00:12:55.332 "seek_hole": false, 00:12:55.332 "seek_data": false, 00:12:55.332 "copy": true, 00:12:55.332 "nvme_iov_md": false 00:12:55.332 }, 00:12:55.332 "memory_domains": [ 00:12:55.332 { 00:12:55.332 "dma_device_id": "system", 00:12:55.332 "dma_device_type": 1 00:12:55.332 }, 00:12:55.332 { 00:12:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.332 "dma_device_type": 2 00:12:55.332 } 00:12:55.332 ], 00:12:55.332 "driver_specific": {} 00:12:55.332 } 00:12:55.332 ] 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.332 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.333 BaseBdev3 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.333 [ 00:12:55.333 { 00:12:55.333 "name": "BaseBdev3", 00:12:55.333 "aliases": [ 00:12:55.333 "aebdda90-3ec1-4fe1-afad-0e4fd8a55028" 00:12:55.333 ], 00:12:55.333 "product_name": "Malloc disk", 00:12:55.333 "block_size": 512, 00:12:55.333 "num_blocks": 65536, 00:12:55.333 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:55.333 "assigned_rate_limits": { 00:12:55.333 "rw_ios_per_sec": 0, 00:12:55.333 "rw_mbytes_per_sec": 0, 00:12:55.333 "r_mbytes_per_sec": 0, 00:12:55.333 "w_mbytes_per_sec": 0 00:12:55.333 }, 00:12:55.333 "claimed": false, 00:12:55.333 "zoned": false, 00:12:55.333 "supported_io_types": { 00:12:55.333 "read": true, 00:12:55.333 "write": true, 00:12:55.333 "unmap": true, 00:12:55.333 "flush": true, 00:12:55.333 "reset": true, 00:12:55.333 "nvme_admin": false, 00:12:55.333 "nvme_io": false, 00:12:55.333 "nvme_io_md": false, 00:12:55.333 "write_zeroes": true, 00:12:55.333 "zcopy": true, 00:12:55.333 "get_zone_info": false, 00:12:55.333 "zone_management": false, 00:12:55.333 "zone_append": false, 00:12:55.333 "compare": false, 00:12:55.333 "compare_and_write": false, 00:12:55.333 "abort": true, 00:12:55.333 "seek_hole": false, 00:12:55.333 "seek_data": false, 00:12:55.333 "copy": true, 00:12:55.333 "nvme_iov_md": false 00:12:55.333 }, 00:12:55.333 "memory_domains": [ 00:12:55.333 { 00:12:55.333 "dma_device_id": "system", 00:12:55.333 "dma_device_type": 1 00:12:55.333 }, 00:12:55.333 { 00:12:55.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.333 "dma_device_type": 2 00:12:55.333 } 00:12:55.333 ], 00:12:55.333 "driver_specific": {} 00:12:55.333 } 00:12:55.333 ] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:55.333 01:57:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.333 [2024-12-07 01:57:00.636675] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:55.333 [2024-12-07 01:57:00.636714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:55.333 [2024-12-07 01:57:00.636734] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:55.333 [2024-12-07 01:57:00.638512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.333 01:57:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.333 "name": "Existed_Raid", 00:12:55.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.333 "strip_size_kb": 64, 00:12:55.333 "state": "configuring", 00:12:55.333 "raid_level": "raid5f", 00:12:55.333 "superblock": false, 00:12:55.333 "num_base_bdevs": 3, 00:12:55.333 "num_base_bdevs_discovered": 2, 00:12:55.333 "num_base_bdevs_operational": 3, 00:12:55.333 "base_bdevs_list": [ 00:12:55.333 { 00:12:55.333 "name": "BaseBdev1", 00:12:55.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.333 "is_configured": false, 00:12:55.333 "data_offset": 0, 00:12:55.333 "data_size": 0 00:12:55.333 }, 00:12:55.333 { 00:12:55.333 "name": "BaseBdev2", 00:12:55.333 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:55.333 "is_configured": true, 00:12:55.333 "data_offset": 0, 00:12:55.333 "data_size": 65536 00:12:55.333 }, 00:12:55.333 { 00:12:55.333 "name": "BaseBdev3", 00:12:55.333 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:55.333 "is_configured": true, 
00:12:55.333 "data_offset": 0, 00:12:55.333 "data_size": 65536 00:12:55.333 } 00:12:55.333 ] 00:12:55.333 }' 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.333 01:57:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.898 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:55.898 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.898 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.898 [2024-12-07 01:57:01.103828] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.898 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.899 01:57:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.899 "name": "Existed_Raid", 00:12:55.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.899 "strip_size_kb": 64, 00:12:55.899 "state": "configuring", 00:12:55.899 "raid_level": "raid5f", 00:12:55.899 "superblock": false, 00:12:55.899 "num_base_bdevs": 3, 00:12:55.899 "num_base_bdevs_discovered": 1, 00:12:55.899 "num_base_bdevs_operational": 3, 00:12:55.899 "base_bdevs_list": [ 00:12:55.899 { 00:12:55.899 "name": "BaseBdev1", 00:12:55.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.899 "is_configured": false, 00:12:55.899 "data_offset": 0, 00:12:55.899 "data_size": 0 00:12:55.899 }, 00:12:55.899 { 00:12:55.899 "name": null, 00:12:55.899 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:55.899 "is_configured": false, 00:12:55.899 "data_offset": 0, 00:12:55.899 "data_size": 65536 00:12:55.899 }, 00:12:55.899 { 00:12:55.899 "name": "BaseBdev3", 00:12:55.899 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:55.899 "is_configured": true, 00:12:55.899 "data_offset": 0, 00:12:55.899 "data_size": 65536 00:12:55.899 } 00:12:55.899 ] 00:12:55.899 }' 00:12:55.899 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.899 01:57:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.157 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.415 [2024-12-07 01:57:01.629796] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.415 BaseBdev1 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:56.415 01:57:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:56.415 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.416 [ 00:12:56.416 { 00:12:56.416 "name": "BaseBdev1", 00:12:56.416 "aliases": [ 00:12:56.416 "0c35e991-a5a4-4183-be42-34498b8e671e" 00:12:56.416 ], 00:12:56.416 "product_name": "Malloc disk", 00:12:56.416 "block_size": 512, 00:12:56.416 "num_blocks": 65536, 00:12:56.416 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:56.416 "assigned_rate_limits": { 00:12:56.416 "rw_ios_per_sec": 0, 00:12:56.416 "rw_mbytes_per_sec": 0, 00:12:56.416 "r_mbytes_per_sec": 0, 00:12:56.416 "w_mbytes_per_sec": 0 00:12:56.416 }, 00:12:56.416 "claimed": true, 00:12:56.416 "claim_type": "exclusive_write", 00:12:56.416 "zoned": false, 00:12:56.416 "supported_io_types": { 00:12:56.416 "read": true, 00:12:56.416 "write": true, 00:12:56.416 "unmap": true, 00:12:56.416 "flush": true, 00:12:56.416 "reset": true, 00:12:56.416 "nvme_admin": false, 00:12:56.416 "nvme_io": false, 00:12:56.416 "nvme_io_md": false, 00:12:56.416 "write_zeroes": true, 00:12:56.416 "zcopy": true, 00:12:56.416 "get_zone_info": false, 00:12:56.416 "zone_management": false, 00:12:56.416 "zone_append": false, 00:12:56.416 
"compare": false, 00:12:56.416 "compare_and_write": false, 00:12:56.416 "abort": true, 00:12:56.416 "seek_hole": false, 00:12:56.416 "seek_data": false, 00:12:56.416 "copy": true, 00:12:56.416 "nvme_iov_md": false 00:12:56.416 }, 00:12:56.416 "memory_domains": [ 00:12:56.416 { 00:12:56.416 "dma_device_id": "system", 00:12:56.416 "dma_device_type": 1 00:12:56.416 }, 00:12:56.416 { 00:12:56.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.416 "dma_device_type": 2 00:12:56.416 } 00:12:56.416 ], 00:12:56.416 "driver_specific": {} 00:12:56.416 } 00:12:56.416 ] 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.416 01:57:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.416 "name": "Existed_Raid", 00:12:56.416 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.416 "strip_size_kb": 64, 00:12:56.416 "state": "configuring", 00:12:56.416 "raid_level": "raid5f", 00:12:56.416 "superblock": false, 00:12:56.416 "num_base_bdevs": 3, 00:12:56.416 "num_base_bdevs_discovered": 2, 00:12:56.416 "num_base_bdevs_operational": 3, 00:12:56.416 "base_bdevs_list": [ 00:12:56.416 { 00:12:56.416 "name": "BaseBdev1", 00:12:56.416 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:56.416 "is_configured": true, 00:12:56.416 "data_offset": 0, 00:12:56.416 "data_size": 65536 00:12:56.416 }, 00:12:56.416 { 00:12:56.416 "name": null, 00:12:56.416 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:56.416 "is_configured": false, 00:12:56.416 "data_offset": 0, 00:12:56.416 "data_size": 65536 00:12:56.416 }, 00:12:56.416 { 00:12:56.416 "name": "BaseBdev3", 00:12:56.416 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:56.416 "is_configured": true, 00:12:56.416 "data_offset": 0, 00:12:56.416 "data_size": 65536 00:12:56.416 } 00:12:56.416 ] 00:12:56.416 }' 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.416 01:57:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 01:57:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.713 [2024-12-07 01:57:02.152950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:56.713 01:57:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.713 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.975 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.975 "name": "Existed_Raid", 00:12:56.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.975 "strip_size_kb": 64, 00:12:56.975 "state": "configuring", 00:12:56.975 "raid_level": "raid5f", 00:12:56.975 "superblock": false, 00:12:56.975 "num_base_bdevs": 3, 00:12:56.975 "num_base_bdevs_discovered": 1, 00:12:56.975 "num_base_bdevs_operational": 3, 00:12:56.975 "base_bdevs_list": [ 00:12:56.975 { 00:12:56.975 "name": "BaseBdev1", 00:12:56.975 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:56.975 "is_configured": true, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": null, 00:12:56.975 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:56.975 "is_configured": false, 00:12:56.975 "data_offset": 0, 00:12:56.975 "data_size": 65536 00:12:56.975 }, 00:12:56.975 { 00:12:56.975 "name": null, 
00:12:56.975 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:56.976 "is_configured": false, 00:12:56.976 "data_offset": 0, 00:12:56.976 "data_size": 65536 00:12:56.976 } 00:12:56.976 ] 00:12:56.976 }' 00:12:56.976 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.976 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.238 [2024-12-07 01:57:02.676095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.238 01:57:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.238 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.495 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.495 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.495 "name": "Existed_Raid", 00:12:57.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.495 "strip_size_kb": 64, 00:12:57.495 "state": "configuring", 00:12:57.495 "raid_level": "raid5f", 00:12:57.495 "superblock": false, 00:12:57.495 "num_base_bdevs": 3, 00:12:57.495 "num_base_bdevs_discovered": 2, 00:12:57.495 "num_base_bdevs_operational": 3, 00:12:57.495 "base_bdevs_list": [ 00:12:57.495 { 
00:12:57.495 "name": "BaseBdev1", 00:12:57.495 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:57.495 "is_configured": true, 00:12:57.495 "data_offset": 0, 00:12:57.495 "data_size": 65536 00:12:57.495 }, 00:12:57.495 { 00:12:57.495 "name": null, 00:12:57.495 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:57.495 "is_configured": false, 00:12:57.495 "data_offset": 0, 00:12:57.495 "data_size": 65536 00:12:57.495 }, 00:12:57.495 { 00:12:57.495 "name": "BaseBdev3", 00:12:57.495 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:57.495 "is_configured": true, 00:12:57.495 "data_offset": 0, 00:12:57.495 "data_size": 65536 00:12:57.495 } 00:12:57.495 ] 00:12:57.495 }' 00:12:57.495 01:57:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.495 01:57:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.754 [2024-12-07 01:57:03.187234] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.754 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.011 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.011 01:57:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.011 "name": "Existed_Raid", 00:12:58.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.011 "strip_size_kb": 64, 00:12:58.011 "state": "configuring", 00:12:58.011 "raid_level": "raid5f", 00:12:58.011 "superblock": false, 00:12:58.011 "num_base_bdevs": 3, 00:12:58.011 "num_base_bdevs_discovered": 1, 00:12:58.011 "num_base_bdevs_operational": 3, 00:12:58.011 "base_bdevs_list": [ 00:12:58.011 { 00:12:58.011 "name": null, 00:12:58.011 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:58.011 "is_configured": false, 00:12:58.011 "data_offset": 0, 00:12:58.011 "data_size": 65536 00:12:58.011 }, 00:12:58.011 { 00:12:58.011 "name": null, 00:12:58.011 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:58.011 "is_configured": false, 00:12:58.011 "data_offset": 0, 00:12:58.011 "data_size": 65536 00:12:58.011 }, 00:12:58.011 { 00:12:58.011 "name": "BaseBdev3", 00:12:58.011 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:58.011 "is_configured": true, 00:12:58.011 "data_offset": 0, 00:12:58.011 "data_size": 65536 00:12:58.011 } 00:12:58.011 ] 00:12:58.011 }' 00:12:58.011 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.011 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 [2024-12-07 01:57:03.672914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.268 01:57:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.268 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.526 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.526 "name": "Existed_Raid", 00:12:58.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.526 "strip_size_kb": 64, 00:12:58.526 "state": "configuring", 00:12:58.526 "raid_level": "raid5f", 00:12:58.526 "superblock": false, 00:12:58.526 "num_base_bdevs": 3, 00:12:58.527 "num_base_bdevs_discovered": 2, 00:12:58.527 "num_base_bdevs_operational": 3, 00:12:58.527 "base_bdevs_list": [ 00:12:58.527 { 00:12:58.527 "name": null, 00:12:58.527 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:58.527 "is_configured": false, 00:12:58.527 "data_offset": 0, 00:12:58.527 "data_size": 65536 00:12:58.527 }, 00:12:58.527 { 00:12:58.527 "name": "BaseBdev2", 00:12:58.527 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:58.527 "is_configured": true, 00:12:58.527 "data_offset": 0, 00:12:58.527 "data_size": 65536 00:12:58.527 }, 00:12:58.527 { 00:12:58.527 "name": "BaseBdev3", 00:12:58.527 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:58.527 "is_configured": true, 00:12:58.527 "data_offset": 0, 00:12:58.527 "data_size": 65536 00:12:58.527 } 00:12:58.527 ] 00:12:58.527 }' 00:12:58.527 01:57:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.527 01:57:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.786 01:57:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0c35e991-a5a4-4183-be42-34498b8e671e 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 [2024-12-07 01:57:04.230755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:58.786 [2024-12-07 01:57:04.230819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:58.786 [2024-12-07 01:57:04.230828] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:58.786 [2024-12-07 01:57:04.231082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002870 00:12:58.786 [2024-12-07 01:57:04.231490] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:58.786 [2024-12-07 01:57:04.231513] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:58.786 [2024-12-07 01:57:04.231698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.786 NewBaseBdev 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:58.786 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.786 01:57:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.045 [ 00:12:59.045 { 00:12:59.045 "name": "NewBaseBdev", 00:12:59.045 "aliases": [ 00:12:59.045 "0c35e991-a5a4-4183-be42-34498b8e671e" 00:12:59.045 ], 00:12:59.045 "product_name": "Malloc disk", 00:12:59.045 "block_size": 512, 00:12:59.045 "num_blocks": 65536, 00:12:59.045 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:59.045 "assigned_rate_limits": { 00:12:59.045 "rw_ios_per_sec": 0, 00:12:59.045 "rw_mbytes_per_sec": 0, 00:12:59.045 "r_mbytes_per_sec": 0, 00:12:59.045 "w_mbytes_per_sec": 0 00:12:59.045 }, 00:12:59.045 "claimed": true, 00:12:59.045 "claim_type": "exclusive_write", 00:12:59.045 "zoned": false, 00:12:59.045 "supported_io_types": { 00:12:59.045 "read": true, 00:12:59.045 "write": true, 00:12:59.045 "unmap": true, 00:12:59.045 "flush": true, 00:12:59.045 "reset": true, 00:12:59.045 "nvme_admin": false, 00:12:59.045 "nvme_io": false, 00:12:59.045 "nvme_io_md": false, 00:12:59.045 "write_zeroes": true, 00:12:59.045 "zcopy": true, 00:12:59.045 "get_zone_info": false, 00:12:59.045 "zone_management": false, 00:12:59.045 "zone_append": false, 00:12:59.045 "compare": false, 00:12:59.045 "compare_and_write": false, 00:12:59.045 "abort": true, 00:12:59.045 "seek_hole": false, 00:12:59.045 "seek_data": false, 00:12:59.045 "copy": true, 00:12:59.045 "nvme_iov_md": false 00:12:59.045 }, 00:12:59.045 "memory_domains": [ 00:12:59.045 { 00:12:59.045 "dma_device_id": "system", 00:12:59.045 "dma_device_type": 1 00:12:59.045 }, 00:12:59.045 { 00:12:59.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.045 "dma_device_type": 2 00:12:59.045 } 00:12:59.045 ], 00:12:59.045 "driver_specific": {} 00:12:59.045 } 00:12:59.045 ] 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:59.045 01:57:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.045 "name": "Existed_Raid", 00:12:59.045 "uuid": "e48d0b4c-9edd-4514-b4ee-a530558df8fa", 00:12:59.045 "strip_size_kb": 64, 00:12:59.045 "state": "online", 
00:12:59.045 "raid_level": "raid5f", 00:12:59.045 "superblock": false, 00:12:59.045 "num_base_bdevs": 3, 00:12:59.045 "num_base_bdevs_discovered": 3, 00:12:59.045 "num_base_bdevs_operational": 3, 00:12:59.045 "base_bdevs_list": [ 00:12:59.045 { 00:12:59.045 "name": "NewBaseBdev", 00:12:59.045 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:59.045 "is_configured": true, 00:12:59.045 "data_offset": 0, 00:12:59.045 "data_size": 65536 00:12:59.045 }, 00:12:59.045 { 00:12:59.045 "name": "BaseBdev2", 00:12:59.045 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:59.045 "is_configured": true, 00:12:59.045 "data_offset": 0, 00:12:59.045 "data_size": 65536 00:12:59.045 }, 00:12:59.045 { 00:12:59.045 "name": "BaseBdev3", 00:12:59.045 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:59.045 "is_configured": true, 00:12:59.045 "data_offset": 0, 00:12:59.045 "data_size": 65536 00:12:59.045 } 00:12:59.045 ] 00:12:59.045 }' 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.045 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:59.304 01:57:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.304 [2024-12-07 01:57:04.750070] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.304 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.563 "name": "Existed_Raid", 00:12:59.563 "aliases": [ 00:12:59.563 "e48d0b4c-9edd-4514-b4ee-a530558df8fa" 00:12:59.563 ], 00:12:59.563 "product_name": "Raid Volume", 00:12:59.563 "block_size": 512, 00:12:59.563 "num_blocks": 131072, 00:12:59.563 "uuid": "e48d0b4c-9edd-4514-b4ee-a530558df8fa", 00:12:59.563 "assigned_rate_limits": { 00:12:59.563 "rw_ios_per_sec": 0, 00:12:59.563 "rw_mbytes_per_sec": 0, 00:12:59.563 "r_mbytes_per_sec": 0, 00:12:59.563 "w_mbytes_per_sec": 0 00:12:59.563 }, 00:12:59.563 "claimed": false, 00:12:59.563 "zoned": false, 00:12:59.563 "supported_io_types": { 00:12:59.563 "read": true, 00:12:59.563 "write": true, 00:12:59.563 "unmap": false, 00:12:59.563 "flush": false, 00:12:59.563 "reset": true, 00:12:59.563 "nvme_admin": false, 00:12:59.563 "nvme_io": false, 00:12:59.563 "nvme_io_md": false, 00:12:59.563 "write_zeroes": true, 00:12:59.563 "zcopy": false, 00:12:59.563 "get_zone_info": false, 00:12:59.563 "zone_management": false, 00:12:59.563 "zone_append": false, 00:12:59.563 "compare": false, 00:12:59.563 "compare_and_write": false, 00:12:59.563 "abort": false, 00:12:59.563 "seek_hole": false, 00:12:59.563 "seek_data": false, 00:12:59.563 "copy": false, 00:12:59.563 "nvme_iov_md": false 00:12:59.563 }, 00:12:59.563 "driver_specific": { 00:12:59.563 "raid": { 00:12:59.563 "uuid": 
"e48d0b4c-9edd-4514-b4ee-a530558df8fa", 00:12:59.563 "strip_size_kb": 64, 00:12:59.563 "state": "online", 00:12:59.563 "raid_level": "raid5f", 00:12:59.563 "superblock": false, 00:12:59.563 "num_base_bdevs": 3, 00:12:59.563 "num_base_bdevs_discovered": 3, 00:12:59.563 "num_base_bdevs_operational": 3, 00:12:59.563 "base_bdevs_list": [ 00:12:59.563 { 00:12:59.563 "name": "NewBaseBdev", 00:12:59.563 "uuid": "0c35e991-a5a4-4183-be42-34498b8e671e", 00:12:59.563 "is_configured": true, 00:12:59.563 "data_offset": 0, 00:12:59.563 "data_size": 65536 00:12:59.563 }, 00:12:59.563 { 00:12:59.563 "name": "BaseBdev2", 00:12:59.563 "uuid": "83167244-39b3-442c-abb4-888b2323b072", 00:12:59.563 "is_configured": true, 00:12:59.563 "data_offset": 0, 00:12:59.563 "data_size": 65536 00:12:59.563 }, 00:12:59.563 { 00:12:59.563 "name": "BaseBdev3", 00:12:59.563 "uuid": "aebdda90-3ec1-4fe1-afad-0e4fd8a55028", 00:12:59.563 "is_configured": true, 00:12:59.563 "data_offset": 0, 00:12:59.563 "data_size": 65536 00:12:59.563 } 00:12:59.563 ] 00:12:59.563 } 00:12:59.563 } 00:12:59.563 }' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:59.563 BaseBdev2 00:12:59.563 BaseBdev3' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.563 01:57:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.563 [2024-12-07 01:57:05.005431] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:59.564 [2024-12-07 01:57:05.005460] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.564 [2024-12-07 01:57:05.005523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.564 [2024-12-07 01:57:05.005766] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.564 [2024-12-07 01:57:05.005786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90153 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 90153 ']' 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 90153 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:59.564 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90153 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.827 killing process with pid 90153 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90153' 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 90153 00:12:59.827 [2024-12-07 01:57:05.057909] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:59.827 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 90153 00:12:59.827 [2024-12-07 01:57:05.088873] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:00.088 00:13:00.088 real 0m9.025s 00:13:00.088 user 0m15.438s 00:13:00.088 sys 0m1.875s 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.088 ************************************ 00:13:00.088 END TEST raid5f_state_function_test 00:13:00.088 ************************************ 00:13:00.088 01:57:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:13:00.088 01:57:05 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:00.088 01:57:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:00.088 01:57:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:00.088 ************************************ 00:13:00.088 START TEST raid5f_state_function_test_sb 00:13:00.088 ************************************ 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:00.088 01:57:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90755 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90755' 00:13:00.088 Process raid pid: 90755 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90755 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 90755 ']' 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:00.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:00.088 01:57:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.088 [2024-12-07 01:57:05.496503] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:00.088 [2024-12-07 01:57:05.497072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.347 [2024-12-07 01:57:05.625590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.347 [2024-12-07 01:57:05.668754] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.347 [2024-12-07 01:57:05.710009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.347 [2024-12-07 01:57:05.710055] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.913 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:00.913 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:00.913 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:00.913 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.913 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.913 [2024-12-07 01:57:06.330572] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.913 [2024-12-07 01:57:06.330616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.913 [2024-12-07 01:57:06.330628] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:00.914 [2024-12-07 01:57:06.330638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:00.914 [2024-12-07 01:57:06.330644] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:13:00.914 [2024-12-07 01:57:06.330657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.914 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.914 01:57:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.172 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.173 "name": "Existed_Raid", 00:13:01.173 "uuid": "d2c3120e-8f68-406e-af6d-83a6da5d1e86", 00:13:01.173 "strip_size_kb": 64, 00:13:01.173 "state": "configuring", 00:13:01.173 "raid_level": "raid5f", 00:13:01.173 "superblock": true, 00:13:01.173 "num_base_bdevs": 3, 00:13:01.173 "num_base_bdevs_discovered": 0, 00:13:01.173 "num_base_bdevs_operational": 3, 00:13:01.173 "base_bdevs_list": [ 00:13:01.173 { 00:13:01.173 "name": "BaseBdev1", 00:13:01.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.173 "is_configured": false, 00:13:01.173 "data_offset": 0, 00:13:01.173 "data_size": 0 00:13:01.173 }, 00:13:01.173 { 00:13:01.173 "name": "BaseBdev2", 00:13:01.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.173 "is_configured": false, 00:13:01.173 "data_offset": 0, 00:13:01.173 "data_size": 0 00:13:01.173 }, 00:13:01.173 { 00:13:01.173 "name": "BaseBdev3", 00:13:01.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.173 "is_configured": false, 00:13:01.173 "data_offset": 0, 00:13:01.173 "data_size": 0 00:13:01.173 } 00:13:01.173 ] 00:13:01.173 }' 00:13:01.173 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.173 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.432 [2024-12-07 01:57:06.789667] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.432 
[2024-12-07 01:57:06.789718] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.432 [2024-12-07 01:57:06.797672] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:01.432 [2024-12-07 01:57:06.797716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:01.432 [2024-12-07 01:57:06.797724] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.432 [2024-12-07 01:57:06.797733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.432 [2024-12-07 01:57:06.797739] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.432 [2024-12-07 01:57:06.797748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.432 [2024-12-07 01:57:06.814317] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.432 BaseBdev1 00:13:01.432 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 [ 00:13:01.433 { 00:13:01.433 "name": "BaseBdev1", 00:13:01.433 "aliases": [ 00:13:01.433 "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09" 00:13:01.433 ], 00:13:01.433 "product_name": "Malloc disk", 00:13:01.433 "block_size": 512, 00:13:01.433 
"num_blocks": 65536, 00:13:01.433 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:01.433 "assigned_rate_limits": { 00:13:01.433 "rw_ios_per_sec": 0, 00:13:01.433 "rw_mbytes_per_sec": 0, 00:13:01.433 "r_mbytes_per_sec": 0, 00:13:01.433 "w_mbytes_per_sec": 0 00:13:01.433 }, 00:13:01.433 "claimed": true, 00:13:01.433 "claim_type": "exclusive_write", 00:13:01.433 "zoned": false, 00:13:01.433 "supported_io_types": { 00:13:01.433 "read": true, 00:13:01.433 "write": true, 00:13:01.433 "unmap": true, 00:13:01.433 "flush": true, 00:13:01.433 "reset": true, 00:13:01.433 "nvme_admin": false, 00:13:01.433 "nvme_io": false, 00:13:01.433 "nvme_io_md": false, 00:13:01.433 "write_zeroes": true, 00:13:01.433 "zcopy": true, 00:13:01.433 "get_zone_info": false, 00:13:01.433 "zone_management": false, 00:13:01.433 "zone_append": false, 00:13:01.433 "compare": false, 00:13:01.433 "compare_and_write": false, 00:13:01.433 "abort": true, 00:13:01.433 "seek_hole": false, 00:13:01.433 "seek_data": false, 00:13:01.433 "copy": true, 00:13:01.433 "nvme_iov_md": false 00:13:01.433 }, 00:13:01.433 "memory_domains": [ 00:13:01.433 { 00:13:01.433 "dma_device_id": "system", 00:13:01.433 "dma_device_type": 1 00:13:01.433 }, 00:13:01.433 { 00:13:01.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.433 "dma_device_type": 2 00:13:01.433 } 00:13:01.433 ], 00:13:01.433 "driver_specific": {} 00:13:01.433 } 00:13:01.433 ] 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.433 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.691 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.691 "name": "Existed_Raid", 00:13:01.691 "uuid": "88a831bc-a777-4dd1-be36-aa79914863fc", 00:13:01.691 "strip_size_kb": 64, 00:13:01.691 "state": "configuring", 00:13:01.691 "raid_level": "raid5f", 00:13:01.691 "superblock": true, 00:13:01.691 "num_base_bdevs": 3, 00:13:01.691 "num_base_bdevs_discovered": 1, 00:13:01.691 "num_base_bdevs_operational": 3, 00:13:01.691 "base_bdevs_list": [ 00:13:01.691 { 00:13:01.691 
"name": "BaseBdev1", 00:13:01.691 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:01.691 "is_configured": true, 00:13:01.691 "data_offset": 2048, 00:13:01.691 "data_size": 63488 00:13:01.691 }, 00:13:01.691 { 00:13:01.691 "name": "BaseBdev2", 00:13:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.691 "is_configured": false, 00:13:01.691 "data_offset": 0, 00:13:01.691 "data_size": 0 00:13:01.691 }, 00:13:01.691 { 00:13:01.691 "name": "BaseBdev3", 00:13:01.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.691 "is_configured": false, 00:13:01.691 "data_offset": 0, 00:13:01.691 "data_size": 0 00:13:01.691 } 00:13:01.691 ] 00:13:01.691 }' 00:13:01.691 01:57:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.691 01:57:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.950 [2024-12-07 01:57:07.257601] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.950 [2024-12-07 01:57:07.257668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.950 [2024-12-07 01:57:07.269631] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.950 [2024-12-07 01:57:07.271452] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:01.950 [2024-12-07 01:57:07.271489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:01.950 [2024-12-07 01:57:07.271498] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:01.950 [2024-12-07 01:57:07.271508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.950 "name": "Existed_Raid", 00:13:01.950 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:01.950 "strip_size_kb": 64, 00:13:01.950 "state": "configuring", 00:13:01.950 "raid_level": "raid5f", 00:13:01.950 "superblock": true, 00:13:01.950 "num_base_bdevs": 3, 00:13:01.950 "num_base_bdevs_discovered": 1, 00:13:01.950 "num_base_bdevs_operational": 3, 00:13:01.950 "base_bdevs_list": [ 00:13:01.950 { 00:13:01.950 "name": "BaseBdev1", 00:13:01.950 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:01.950 "is_configured": true, 00:13:01.950 "data_offset": 2048, 00:13:01.950 "data_size": 63488 00:13:01.950 }, 00:13:01.950 { 00:13:01.950 "name": "BaseBdev2", 00:13:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.950 "is_configured": false, 00:13:01.950 "data_offset": 0, 00:13:01.950 "data_size": 0 00:13:01.950 }, 00:13:01.950 { 00:13:01.950 "name": "BaseBdev3", 00:13:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.950 "is_configured": false, 00:13:01.950 "data_offset": 0, 00:13:01.950 "data_size": 
0 00:13:01.950 } 00:13:01.950 ] 00:13:01.950 }' 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.950 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.517 [2024-12-07 01:57:07.730354] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:02.517 BaseBdev2 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.517 [ 00:13:02.517 { 00:13:02.517 "name": "BaseBdev2", 00:13:02.517 "aliases": [ 00:13:02.517 "83d564b7-1042-4267-b95b-58187f8fdf70" 00:13:02.517 ], 00:13:02.517 "product_name": "Malloc disk", 00:13:02.517 "block_size": 512, 00:13:02.517 "num_blocks": 65536, 00:13:02.517 "uuid": "83d564b7-1042-4267-b95b-58187f8fdf70", 00:13:02.517 "assigned_rate_limits": { 00:13:02.517 "rw_ios_per_sec": 0, 00:13:02.517 "rw_mbytes_per_sec": 0, 00:13:02.517 "r_mbytes_per_sec": 0, 00:13:02.517 "w_mbytes_per_sec": 0 00:13:02.517 }, 00:13:02.517 "claimed": true, 00:13:02.517 "claim_type": "exclusive_write", 00:13:02.517 "zoned": false, 00:13:02.517 "supported_io_types": { 00:13:02.517 "read": true, 00:13:02.517 "write": true, 00:13:02.517 "unmap": true, 00:13:02.517 "flush": true, 00:13:02.517 "reset": true, 00:13:02.517 "nvme_admin": false, 00:13:02.517 "nvme_io": false, 00:13:02.517 "nvme_io_md": false, 00:13:02.517 "write_zeroes": true, 00:13:02.517 "zcopy": true, 00:13:02.517 "get_zone_info": false, 00:13:02.517 "zone_management": false, 00:13:02.517 "zone_append": false, 00:13:02.517 "compare": false, 00:13:02.517 "compare_and_write": false, 00:13:02.517 "abort": true, 00:13:02.517 "seek_hole": false, 00:13:02.517 "seek_data": false, 00:13:02.517 "copy": true, 00:13:02.517 "nvme_iov_md": false 00:13:02.517 }, 00:13:02.517 "memory_domains": [ 00:13:02.517 { 00:13:02.517 "dma_device_id": "system", 00:13:02.517 "dma_device_type": 1 00:13:02.517 }, 00:13:02.517 { 00:13:02.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.517 "dma_device_type": 2 00:13:02.517 } 
00:13:02.517 ], 00:13:02.517 "driver_specific": {} 00:13:02.517 } 00:13:02.517 ] 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.517 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.518 "name": "Existed_Raid", 00:13:02.518 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:02.518 "strip_size_kb": 64, 00:13:02.518 "state": "configuring", 00:13:02.518 "raid_level": "raid5f", 00:13:02.518 "superblock": true, 00:13:02.518 "num_base_bdevs": 3, 00:13:02.518 "num_base_bdevs_discovered": 2, 00:13:02.518 "num_base_bdevs_operational": 3, 00:13:02.518 "base_bdevs_list": [ 00:13:02.518 { 00:13:02.518 "name": "BaseBdev1", 00:13:02.518 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:02.518 "is_configured": true, 00:13:02.518 "data_offset": 2048, 00:13:02.518 "data_size": 63488 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "name": "BaseBdev2", 00:13:02.518 "uuid": "83d564b7-1042-4267-b95b-58187f8fdf70", 00:13:02.518 "is_configured": true, 00:13:02.518 "data_offset": 2048, 00:13:02.518 "data_size": 63488 00:13:02.518 }, 00:13:02.518 { 00:13:02.518 "name": "BaseBdev3", 00:13:02.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.518 "is_configured": false, 00:13:02.518 "data_offset": 0, 00:13:02.518 "data_size": 0 00:13:02.518 } 00:13:02.518 ] 00:13:02.518 }' 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.518 01:57:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.777 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.777 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:02.777 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.777 [2024-12-07 01:57:08.236605] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:02.777 [2024-12-07 01:57:08.236922] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:02.777 [2024-12-07 01:57:08.236993] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:02.777 BaseBdev3 00:13:02.777 [2024-12-07 01:57:08.237291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:02.777 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:03.036 [2024-12-07 01:57:08.237868] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:03.036 [2024-12-07 01:57:08.237888] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:03.036 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:03.037 [2024-12-07 01:57:08.238011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.037 [ 00:13:03.037 { 00:13:03.037 "name": "BaseBdev3", 00:13:03.037 "aliases": [ 00:13:03.037 "4084714b-0208-436e-8cf5-5745369a99fc" 00:13:03.037 ], 00:13:03.037 "product_name": "Malloc disk", 00:13:03.037 "block_size": 512, 00:13:03.037 "num_blocks": 65536, 00:13:03.037 "uuid": "4084714b-0208-436e-8cf5-5745369a99fc", 00:13:03.037 "assigned_rate_limits": { 00:13:03.037 "rw_ios_per_sec": 0, 00:13:03.037 "rw_mbytes_per_sec": 0, 00:13:03.037 "r_mbytes_per_sec": 0, 00:13:03.037 "w_mbytes_per_sec": 0 00:13:03.037 }, 00:13:03.037 "claimed": true, 00:13:03.037 "claim_type": "exclusive_write", 00:13:03.037 "zoned": false, 00:13:03.037 "supported_io_types": { 00:13:03.037 "read": true, 00:13:03.037 "write": true, 00:13:03.037 "unmap": true, 00:13:03.037 "flush": true, 00:13:03.037 "reset": true, 00:13:03.037 "nvme_admin": false, 00:13:03.037 "nvme_io": false, 00:13:03.037 "nvme_io_md": false, 00:13:03.037 "write_zeroes": true, 00:13:03.037 "zcopy": true, 00:13:03.037 "get_zone_info": false, 00:13:03.037 "zone_management": false, 00:13:03.037 "zone_append": false, 00:13:03.037 "compare": false, 00:13:03.037 "compare_and_write": false, 00:13:03.037 "abort": true, 00:13:03.037 "seek_hole": false, 00:13:03.037 "seek_data": false, 00:13:03.037 "copy": true, 00:13:03.037 "nvme_iov_md": 
false 00:13:03.037 }, 00:13:03.037 "memory_domains": [ 00:13:03.037 { 00:13:03.037 "dma_device_id": "system", 00:13:03.037 "dma_device_type": 1 00:13:03.037 }, 00:13:03.037 { 00:13:03.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.037 "dma_device_type": 2 00:13:03.037 } 00:13:03.037 ], 00:13:03.037 "driver_specific": {} 00:13:03.037 } 00:13:03.037 ] 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.037 "name": "Existed_Raid", 00:13:03.037 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:03.037 "strip_size_kb": 64, 00:13:03.037 "state": "online", 00:13:03.037 "raid_level": "raid5f", 00:13:03.037 "superblock": true, 00:13:03.037 "num_base_bdevs": 3, 00:13:03.037 "num_base_bdevs_discovered": 3, 00:13:03.037 "num_base_bdevs_operational": 3, 00:13:03.037 "base_bdevs_list": [ 00:13:03.037 { 00:13:03.037 "name": "BaseBdev1", 00:13:03.037 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:03.037 "is_configured": true, 00:13:03.037 "data_offset": 2048, 00:13:03.037 "data_size": 63488 00:13:03.037 }, 00:13:03.037 { 00:13:03.037 "name": "BaseBdev2", 00:13:03.037 "uuid": "83d564b7-1042-4267-b95b-58187f8fdf70", 00:13:03.037 "is_configured": true, 00:13:03.037 "data_offset": 2048, 00:13:03.037 "data_size": 63488 00:13:03.037 }, 00:13:03.037 { 00:13:03.037 "name": "BaseBdev3", 00:13:03.037 "uuid": "4084714b-0208-436e-8cf5-5745369a99fc", 00:13:03.037 "is_configured": true, 00:13:03.037 "data_offset": 2048, 00:13:03.037 "data_size": 63488 00:13:03.037 } 00:13:03.037 ] 00:13:03.037 }' 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.037 01:57:08 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.296 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 [2024-12-07 01:57:08.743990] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.555 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.555 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:03.555 "name": "Existed_Raid", 00:13:03.555 "aliases": [ 00:13:03.555 "f0a68423-f2f6-4e5d-8eff-92793d840b22" 00:13:03.555 ], 00:13:03.555 "product_name": "Raid Volume", 00:13:03.555 "block_size": 512, 00:13:03.555 "num_blocks": 126976, 00:13:03.555 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:03.555 "assigned_rate_limits": { 00:13:03.555 "rw_ios_per_sec": 0, 00:13:03.555 "rw_mbytes_per_sec": 0, 00:13:03.555 "r_mbytes_per_sec": 
0, 00:13:03.555 "w_mbytes_per_sec": 0 00:13:03.555 }, 00:13:03.555 "claimed": false, 00:13:03.555 "zoned": false, 00:13:03.555 "supported_io_types": { 00:13:03.555 "read": true, 00:13:03.555 "write": true, 00:13:03.555 "unmap": false, 00:13:03.555 "flush": false, 00:13:03.555 "reset": true, 00:13:03.555 "nvme_admin": false, 00:13:03.555 "nvme_io": false, 00:13:03.555 "nvme_io_md": false, 00:13:03.555 "write_zeroes": true, 00:13:03.555 "zcopy": false, 00:13:03.555 "get_zone_info": false, 00:13:03.555 "zone_management": false, 00:13:03.555 "zone_append": false, 00:13:03.555 "compare": false, 00:13:03.555 "compare_and_write": false, 00:13:03.555 "abort": false, 00:13:03.555 "seek_hole": false, 00:13:03.555 "seek_data": false, 00:13:03.555 "copy": false, 00:13:03.555 "nvme_iov_md": false 00:13:03.555 }, 00:13:03.555 "driver_specific": { 00:13:03.555 "raid": { 00:13:03.555 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:03.555 "strip_size_kb": 64, 00:13:03.555 "state": "online", 00:13:03.555 "raid_level": "raid5f", 00:13:03.555 "superblock": true, 00:13:03.555 "num_base_bdevs": 3, 00:13:03.555 "num_base_bdevs_discovered": 3, 00:13:03.555 "num_base_bdevs_operational": 3, 00:13:03.555 "base_bdevs_list": [ 00:13:03.555 { 00:13:03.555 "name": "BaseBdev1", 00:13:03.555 "uuid": "2e4f6d06-f9e6-4fa4-b71c-58b6c6419b09", 00:13:03.555 "is_configured": true, 00:13:03.555 "data_offset": 2048, 00:13:03.555 "data_size": 63488 00:13:03.555 }, 00:13:03.555 { 00:13:03.555 "name": "BaseBdev2", 00:13:03.555 "uuid": "83d564b7-1042-4267-b95b-58187f8fdf70", 00:13:03.555 "is_configured": true, 00:13:03.555 "data_offset": 2048, 00:13:03.555 "data_size": 63488 00:13:03.555 }, 00:13:03.555 { 00:13:03.555 "name": "BaseBdev3", 00:13:03.555 "uuid": "4084714b-0208-436e-8cf5-5745369a99fc", 00:13:03.555 "is_configured": true, 00:13:03.555 "data_offset": 2048, 00:13:03.555 "data_size": 63488 00:13:03.555 } 00:13:03.555 ] 00:13:03.555 } 00:13:03.555 } 00:13:03.556 }' 00:13:03.556 01:57:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:03.556 BaseBdev2 00:13:03.556 BaseBdev3' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.556 [2024-12-07 01:57:08.975437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.556 01:57:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.556 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.815 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.815 "name": "Existed_Raid", 00:13:03.815 "uuid": "f0a68423-f2f6-4e5d-8eff-92793d840b22", 00:13:03.815 "strip_size_kb": 64, 00:13:03.815 "state": "online", 00:13:03.815 "raid_level": "raid5f", 00:13:03.815 "superblock": true, 00:13:03.815 "num_base_bdevs": 3, 00:13:03.815 "num_base_bdevs_discovered": 2, 00:13:03.815 "num_base_bdevs_operational": 2, 00:13:03.815 "base_bdevs_list": [ 00:13:03.815 { 00:13:03.815 "name": null, 00:13:03.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.815 "is_configured": false, 00:13:03.815 "data_offset": 0, 00:13:03.815 "data_size": 63488 00:13:03.815 }, 00:13:03.815 { 00:13:03.815 "name": "BaseBdev2", 00:13:03.815 "uuid": "83d564b7-1042-4267-b95b-58187f8fdf70", 00:13:03.815 "is_configured": true, 00:13:03.815 "data_offset": 2048, 00:13:03.815 "data_size": 63488 00:13:03.815 }, 00:13:03.815 { 00:13:03.815 "name": "BaseBdev3", 00:13:03.815 "uuid": "4084714b-0208-436e-8cf5-5745369a99fc", 00:13:03.815 "is_configured": true, 00:13:03.815 "data_offset": 2048, 00:13:03.815 "data_size": 63488 00:13:03.815 } 00:13:03.815 ] 00:13:03.815 }' 00:13:03.815 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.815 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.074 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 [2024-12-07 01:57:09.517732] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.074 [2024-12-07 01:57:09.517920] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:04.074 [2024-12-07 01:57:09.528967] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:04.075 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.075 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.075 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.334 01:57:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.334 [2024-12-07 01:57:09.588878] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.334 [2024-12-07 01:57:09.588922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:04.334 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 BaseBdev2 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 [ 00:13:04.335 { 00:13:04.335 "name": "BaseBdev2", 00:13:04.335 "aliases": [ 00:13:04.335 "95ced154-8eeb-45e9-a28d-98f5274a9c10" 00:13:04.335 ], 00:13:04.335 "product_name": "Malloc disk", 00:13:04.335 "block_size": 512, 00:13:04.335 "num_blocks": 65536, 00:13:04.335 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:04.335 "assigned_rate_limits": { 00:13:04.335 "rw_ios_per_sec": 0, 00:13:04.335 "rw_mbytes_per_sec": 0, 00:13:04.335 "r_mbytes_per_sec": 0, 00:13:04.335 "w_mbytes_per_sec": 0 00:13:04.335 }, 00:13:04.335 "claimed": false, 00:13:04.335 "zoned": false, 00:13:04.335 "supported_io_types": { 00:13:04.335 "read": true, 00:13:04.335 "write": true, 00:13:04.335 "unmap": true, 00:13:04.335 "flush": true, 00:13:04.335 "reset": true, 00:13:04.335 "nvme_admin": false, 00:13:04.335 "nvme_io": false, 00:13:04.335 "nvme_io_md": false, 00:13:04.335 "write_zeroes": true, 00:13:04.335 "zcopy": true, 00:13:04.335 "get_zone_info": false, 00:13:04.335 "zone_management": false, 00:13:04.335 "zone_append": false, 00:13:04.335 "compare": false, 00:13:04.335 "compare_and_write": false, 
00:13:04.335 "abort": true, 00:13:04.335 "seek_hole": false, 00:13:04.335 "seek_data": false, 00:13:04.335 "copy": true, 00:13:04.335 "nvme_iov_md": false 00:13:04.335 }, 00:13:04.335 "memory_domains": [ 00:13:04.335 { 00:13:04.335 "dma_device_id": "system", 00:13:04.335 "dma_device_type": 1 00:13:04.335 }, 00:13:04.335 { 00:13:04.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.335 "dma_device_type": 2 00:13:04.335 } 00:13:04.335 ], 00:13:04.335 "driver_specific": {} 00:13:04.335 } 00:13:04.335 ] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 BaseBdev3 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 [ 00:13:04.335 { 00:13:04.335 "name": "BaseBdev3", 00:13:04.335 "aliases": [ 00:13:04.335 "2c688688-4245-4270-b2f8-b145b859cbb8" 00:13:04.335 ], 00:13:04.335 "product_name": "Malloc disk", 00:13:04.335 "block_size": 512, 00:13:04.335 "num_blocks": 65536, 00:13:04.335 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:04.335 "assigned_rate_limits": { 00:13:04.335 "rw_ios_per_sec": 0, 00:13:04.335 "rw_mbytes_per_sec": 0, 00:13:04.335 "r_mbytes_per_sec": 0, 00:13:04.335 "w_mbytes_per_sec": 0 00:13:04.335 }, 00:13:04.335 "claimed": false, 00:13:04.335 "zoned": false, 00:13:04.335 "supported_io_types": { 00:13:04.335 "read": true, 00:13:04.335 "write": true, 00:13:04.335 "unmap": true, 00:13:04.335 "flush": true, 00:13:04.335 "reset": true, 00:13:04.335 "nvme_admin": false, 00:13:04.335 "nvme_io": false, 00:13:04.335 "nvme_io_md": false, 00:13:04.335 "write_zeroes": true, 00:13:04.335 "zcopy": true, 00:13:04.335 "get_zone_info": false, 00:13:04.335 "zone_management": false, 
00:13:04.335 "zone_append": false, 00:13:04.335 "compare": false, 00:13:04.335 "compare_and_write": false, 00:13:04.335 "abort": true, 00:13:04.335 "seek_hole": false, 00:13:04.335 "seek_data": false, 00:13:04.335 "copy": true, 00:13:04.335 "nvme_iov_md": false 00:13:04.335 }, 00:13:04.335 "memory_domains": [ 00:13:04.335 { 00:13:04.335 "dma_device_id": "system", 00:13:04.335 "dma_device_type": 1 00:13:04.335 }, 00:13:04.335 { 00:13:04.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:04.335 "dma_device_type": 2 00:13:04.335 } 00:13:04.335 ], 00:13:04.335 "driver_specific": {} 00:13:04.335 } 00:13:04.335 ] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.335 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.335 [2024-12-07 01:57:09.763340] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:04.335 [2024-12-07 01:57:09.763419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:04.335 [2024-12-07 01:57:09.763458] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.335 [2024-12-07 01:57:09.765266] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.335 
01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.336 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.594 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.594 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:04.594 "name": "Existed_Raid", 00:13:04.594 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:04.594 "strip_size_kb": 64, 00:13:04.594 "state": "configuring", 00:13:04.594 "raid_level": "raid5f", 00:13:04.594 "superblock": true, 00:13:04.594 "num_base_bdevs": 3, 00:13:04.594 "num_base_bdevs_discovered": 2, 00:13:04.594 "num_base_bdevs_operational": 3, 00:13:04.594 "base_bdevs_list": [ 00:13:04.594 { 00:13:04.594 "name": "BaseBdev1", 00:13:04.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.594 "is_configured": false, 00:13:04.594 "data_offset": 0, 00:13:04.594 "data_size": 0 00:13:04.594 }, 00:13:04.594 { 00:13:04.594 "name": "BaseBdev2", 00:13:04.594 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 }, 00:13:04.594 { 00:13:04.594 "name": "BaseBdev3", 00:13:04.594 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:04.594 "is_configured": true, 00:13:04.594 "data_offset": 2048, 00:13:04.594 "data_size": 63488 00:13:04.594 } 00:13:04.594 ] 00:13:04.594 }' 00:13:04.594 01:57:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.594 01:57:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.851 [2024-12-07 01:57:10.254479] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.851 "name": "Existed_Raid", 00:13:04.851 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:04.851 "strip_size_kb": 64, 00:13:04.851 
"state": "configuring", 00:13:04.851 "raid_level": "raid5f", 00:13:04.851 "superblock": true, 00:13:04.851 "num_base_bdevs": 3, 00:13:04.851 "num_base_bdevs_discovered": 1, 00:13:04.851 "num_base_bdevs_operational": 3, 00:13:04.851 "base_bdevs_list": [ 00:13:04.851 { 00:13:04.851 "name": "BaseBdev1", 00:13:04.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.851 "is_configured": false, 00:13:04.851 "data_offset": 0, 00:13:04.851 "data_size": 0 00:13:04.851 }, 00:13:04.851 { 00:13:04.851 "name": null, 00:13:04.851 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:04.851 "is_configured": false, 00:13:04.851 "data_offset": 0, 00:13:04.851 "data_size": 63488 00:13:04.851 }, 00:13:04.851 { 00:13:04.851 "name": "BaseBdev3", 00:13:04.851 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:04.851 "is_configured": true, 00:13:04.851 "data_offset": 2048, 00:13:04.851 "data_size": 63488 00:13:04.851 } 00:13:04.851 ] 00:13:04.851 }' 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.851 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev1 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 [2024-12-07 01:57:10.748404] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.420 BaseBdev1 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:05.420 [ 00:13:05.420 { 00:13:05.420 "name": "BaseBdev1", 00:13:05.420 "aliases": [ 00:13:05.420 "85807469-8b8d-4824-8d25-a67dd78989d0" 00:13:05.420 ], 00:13:05.420 "product_name": "Malloc disk", 00:13:05.420 "block_size": 512, 00:13:05.420 "num_blocks": 65536, 00:13:05.420 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:05.420 "assigned_rate_limits": { 00:13:05.420 "rw_ios_per_sec": 0, 00:13:05.420 "rw_mbytes_per_sec": 0, 00:13:05.420 "r_mbytes_per_sec": 0, 00:13:05.420 "w_mbytes_per_sec": 0 00:13:05.420 }, 00:13:05.420 "claimed": true, 00:13:05.420 "claim_type": "exclusive_write", 00:13:05.420 "zoned": false, 00:13:05.420 "supported_io_types": { 00:13:05.420 "read": true, 00:13:05.420 "write": true, 00:13:05.420 "unmap": true, 00:13:05.420 "flush": true, 00:13:05.420 "reset": true, 00:13:05.420 "nvme_admin": false, 00:13:05.420 "nvme_io": false, 00:13:05.420 "nvme_io_md": false, 00:13:05.420 "write_zeroes": true, 00:13:05.420 "zcopy": true, 00:13:05.420 "get_zone_info": false, 00:13:05.420 "zone_management": false, 00:13:05.420 "zone_append": false, 00:13:05.420 "compare": false, 00:13:05.420 "compare_and_write": false, 00:13:05.420 "abort": true, 00:13:05.420 "seek_hole": false, 00:13:05.420 "seek_data": false, 00:13:05.420 "copy": true, 00:13:05.420 "nvme_iov_md": false 00:13:05.420 }, 00:13:05.420 "memory_domains": [ 00:13:05.420 { 00:13:05.420 "dma_device_id": "system", 00:13:05.420 "dma_device_type": 1 00:13:05.420 }, 00:13:05.420 { 00:13:05.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.420 "dma_device_type": 2 00:13:05.420 } 00:13:05.420 ], 00:13:05.420 "driver_specific": {} 00:13:05.420 } 00:13:05.420 ] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.420 "name": "Existed_Raid", 00:13:05.420 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:05.420 "strip_size_kb": 64, 00:13:05.420 
"state": "configuring", 00:13:05.420 "raid_level": "raid5f", 00:13:05.420 "superblock": true, 00:13:05.420 "num_base_bdevs": 3, 00:13:05.420 "num_base_bdevs_discovered": 2, 00:13:05.420 "num_base_bdevs_operational": 3, 00:13:05.420 "base_bdevs_list": [ 00:13:05.420 { 00:13:05.420 "name": "BaseBdev1", 00:13:05.420 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:05.420 "is_configured": true, 00:13:05.420 "data_offset": 2048, 00:13:05.420 "data_size": 63488 00:13:05.420 }, 00:13:05.420 { 00:13:05.420 "name": null, 00:13:05.420 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:05.420 "is_configured": false, 00:13:05.420 "data_offset": 0, 00:13:05.420 "data_size": 63488 00:13:05.420 }, 00:13:05.420 { 00:13:05.420 "name": "BaseBdev3", 00:13:05.420 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:05.420 "is_configured": true, 00:13:05.420 "data_offset": 2048, 00:13:05.420 "data_size": 63488 00:13:05.420 } 00:13:05.420 ] 00:13:05.420 }' 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.420 01:57:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd 
bdev_raid_remove_base_bdev BaseBdev3 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.986 [2024-12-07 01:57:11.235603] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.986 01:57:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.986 "name": "Existed_Raid", 00:13:05.986 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:05.986 "strip_size_kb": 64, 00:13:05.986 "state": "configuring", 00:13:05.986 "raid_level": "raid5f", 00:13:05.986 "superblock": true, 00:13:05.986 "num_base_bdevs": 3, 00:13:05.986 "num_base_bdevs_discovered": 1, 00:13:05.986 "num_base_bdevs_operational": 3, 00:13:05.986 "base_bdevs_list": [ 00:13:05.986 { 00:13:05.986 "name": "BaseBdev1", 00:13:05.986 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:05.986 "is_configured": true, 00:13:05.986 "data_offset": 2048, 00:13:05.986 "data_size": 63488 00:13:05.986 }, 00:13:05.986 { 00:13:05.986 "name": null, 00:13:05.986 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:05.986 "is_configured": false, 00:13:05.986 "data_offset": 0, 00:13:05.986 "data_size": 63488 00:13:05.986 }, 00:13:05.986 { 00:13:05.986 "name": null, 00:13:05.986 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:05.986 "is_configured": false, 00:13:05.986 "data_offset": 0, 00:13:05.986 "data_size": 63488 00:13:05.986 } 00:13:05.986 ] 00:13:05.986 }' 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.986 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.244 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.244 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.244 01:57:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.503 [2024-12-07 01:57:11.738915] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.503 "name": "Existed_Raid", 00:13:06.503 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:06.503 "strip_size_kb": 64, 00:13:06.503 "state": "configuring", 00:13:06.503 "raid_level": "raid5f", 00:13:06.503 "superblock": true, 00:13:06.503 "num_base_bdevs": 3, 00:13:06.503 "num_base_bdevs_discovered": 2, 00:13:06.503 "num_base_bdevs_operational": 3, 00:13:06.503 "base_bdevs_list": [ 00:13:06.503 { 00:13:06.503 "name": "BaseBdev1", 00:13:06.503 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:06.503 "is_configured": true, 00:13:06.503 "data_offset": 2048, 00:13:06.503 "data_size": 63488 00:13:06.503 }, 00:13:06.503 { 00:13:06.503 "name": null, 00:13:06.503 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:06.503 "is_configured": false, 00:13:06.503 "data_offset": 0, 00:13:06.503 "data_size": 63488 00:13:06.503 }, 00:13:06.503 { 00:13:06.503 "name": "BaseBdev3", 00:13:06.503 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:06.503 "is_configured": true, 00:13:06.503 "data_offset": 
2048, 00:13:06.503 "data_size": 63488 00:13:06.503 } 00:13:06.503 ] 00:13:06.503 }' 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.503 01:57:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.761 [2024-12-07 01:57:12.202162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.761 01:57:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.761 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.762 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.762 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.762 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.762 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.762 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.021 "name": "Existed_Raid", 00:13:07.021 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:07.021 "strip_size_kb": 64, 00:13:07.021 "state": "configuring", 00:13:07.021 "raid_level": "raid5f", 00:13:07.021 "superblock": true, 00:13:07.021 "num_base_bdevs": 3, 00:13:07.021 "num_base_bdevs_discovered": 1, 00:13:07.021 "num_base_bdevs_operational": 3, 00:13:07.021 "base_bdevs_list": [ 00:13:07.021 { 00:13:07.021 "name": null, 00:13:07.021 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 
00:13:07.021 "is_configured": false, 00:13:07.021 "data_offset": 0, 00:13:07.021 "data_size": 63488 00:13:07.021 }, 00:13:07.021 { 00:13:07.021 "name": null, 00:13:07.021 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:07.021 "is_configured": false, 00:13:07.021 "data_offset": 0, 00:13:07.021 "data_size": 63488 00:13:07.021 }, 00:13:07.021 { 00:13:07.021 "name": "BaseBdev3", 00:13:07.021 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:07.021 "is_configured": true, 00:13:07.021 "data_offset": 2048, 00:13:07.021 "data_size": 63488 00:13:07.021 } 00:13:07.021 ] 00:13:07.021 }' 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.021 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.280 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:07.280 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.280 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.280 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.280 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.538 [2024-12-07 01:57:12.751577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.538 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.539 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.539 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.539 01:57:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.539 "name": "Existed_Raid", 00:13:07.539 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:07.539 "strip_size_kb": 64, 00:13:07.539 "state": "configuring", 00:13:07.539 "raid_level": "raid5f", 00:13:07.539 "superblock": true, 00:13:07.539 "num_base_bdevs": 3, 00:13:07.539 "num_base_bdevs_discovered": 2, 00:13:07.539 "num_base_bdevs_operational": 3, 00:13:07.539 "base_bdevs_list": [ 00:13:07.539 { 00:13:07.539 "name": null, 00:13:07.539 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:07.539 "is_configured": false, 00:13:07.539 "data_offset": 0, 00:13:07.539 "data_size": 63488 00:13:07.539 }, 00:13:07.539 { 00:13:07.539 "name": "BaseBdev2", 00:13:07.539 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:07.539 "is_configured": true, 00:13:07.539 "data_offset": 2048, 00:13:07.539 "data_size": 63488 00:13:07.539 }, 00:13:07.539 { 00:13:07.539 "name": "BaseBdev3", 00:13:07.539 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:07.539 "is_configured": true, 00:13:07.539 "data_offset": 2048, 00:13:07.539 "data_size": 63488 00:13:07.539 } 00:13:07.539 ] 00:13:07.539 }' 00:13:07.539 01:57:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.539 01:57:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.797 01:57:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.797 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85807469-8b8d-4824-8d25-a67dd78989d0 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.062 [2024-12-07 01:57:13.305856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:08.062 [2024-12-07 01:57:13.306061] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:08.062 [2024-12-07 01:57:13.306077] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:08.062 [2024-12-07 01:57:13.306313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:13:08.062 NewBaseBdev 00:13:08.062 [2024-12-07 01:57:13.306719] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:08.062 [2024-12-07 01:57:13.306820] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:08.062 [2024-12-07 01:57:13.306946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.062 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.062 [ 00:13:08.062 { 00:13:08.062 "name": "NewBaseBdev", 00:13:08.062 "aliases": [ 00:13:08.062 "85807469-8b8d-4824-8d25-a67dd78989d0" 00:13:08.062 ], 00:13:08.062 "product_name": "Malloc disk", 00:13:08.062 "block_size": 512, 00:13:08.062 "num_blocks": 65536, 00:13:08.062 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 
00:13:08.062 "assigned_rate_limits": { 00:13:08.062 "rw_ios_per_sec": 0, 00:13:08.062 "rw_mbytes_per_sec": 0, 00:13:08.062 "r_mbytes_per_sec": 0, 00:13:08.062 "w_mbytes_per_sec": 0 00:13:08.062 }, 00:13:08.062 "claimed": true, 00:13:08.062 "claim_type": "exclusive_write", 00:13:08.062 "zoned": false, 00:13:08.062 "supported_io_types": { 00:13:08.062 "read": true, 00:13:08.062 "write": true, 00:13:08.062 "unmap": true, 00:13:08.062 "flush": true, 00:13:08.062 "reset": true, 00:13:08.062 "nvme_admin": false, 00:13:08.062 "nvme_io": false, 00:13:08.062 "nvme_io_md": false, 00:13:08.062 "write_zeroes": true, 00:13:08.062 "zcopy": true, 00:13:08.062 "get_zone_info": false, 00:13:08.062 "zone_management": false, 00:13:08.062 "zone_append": false, 00:13:08.062 "compare": false, 00:13:08.062 "compare_and_write": false, 00:13:08.062 "abort": true, 00:13:08.062 "seek_hole": false, 00:13:08.062 "seek_data": false, 00:13:08.062 "copy": true, 00:13:08.062 "nvme_iov_md": false 00:13:08.062 }, 00:13:08.062 "memory_domains": [ 00:13:08.062 { 00:13:08.062 "dma_device_id": "system", 00:13:08.062 "dma_device_type": 1 00:13:08.062 }, 00:13:08.062 { 00:13:08.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:08.063 "dma_device_type": 2 00:13:08.063 } 00:13:08.063 ], 00:13:08.063 "driver_specific": {} 00:13:08.063 } 00:13:08.063 ] 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.063 01:57:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.063 "name": "Existed_Raid", 00:13:08.063 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:08.063 "strip_size_kb": 64, 00:13:08.063 "state": "online", 00:13:08.063 "raid_level": "raid5f", 00:13:08.063 "superblock": true, 00:13:08.063 "num_base_bdevs": 3, 00:13:08.063 "num_base_bdevs_discovered": 3, 00:13:08.063 "num_base_bdevs_operational": 3, 00:13:08.063 "base_bdevs_list": [ 00:13:08.063 { 00:13:08.063 "name": "NewBaseBdev", 00:13:08.063 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 
00:13:08.063 "is_configured": true, 00:13:08.063 "data_offset": 2048, 00:13:08.063 "data_size": 63488 00:13:08.063 }, 00:13:08.063 { 00:13:08.063 "name": "BaseBdev2", 00:13:08.063 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:08.063 "is_configured": true, 00:13:08.063 "data_offset": 2048, 00:13:08.063 "data_size": 63488 00:13:08.063 }, 00:13:08.063 { 00:13:08.063 "name": "BaseBdev3", 00:13:08.063 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:08.063 "is_configured": true, 00:13:08.063 "data_offset": 2048, 00:13:08.063 "data_size": 63488 00:13:08.063 } 00:13:08.063 ] 00:13:08.063 }' 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.063 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.334 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.592 
[2024-12-07 01:57:13.797251] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:08.592 "name": "Existed_Raid", 00:13:08.592 "aliases": [ 00:13:08.592 "8beec2bf-13dc-4351-b842-febae144dd84" 00:13:08.592 ], 00:13:08.592 "product_name": "Raid Volume", 00:13:08.592 "block_size": 512, 00:13:08.592 "num_blocks": 126976, 00:13:08.592 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:08.592 "assigned_rate_limits": { 00:13:08.592 "rw_ios_per_sec": 0, 00:13:08.592 "rw_mbytes_per_sec": 0, 00:13:08.592 "r_mbytes_per_sec": 0, 00:13:08.592 "w_mbytes_per_sec": 0 00:13:08.592 }, 00:13:08.592 "claimed": false, 00:13:08.592 "zoned": false, 00:13:08.592 "supported_io_types": { 00:13:08.592 "read": true, 00:13:08.592 "write": true, 00:13:08.592 "unmap": false, 00:13:08.592 "flush": false, 00:13:08.592 "reset": true, 00:13:08.592 "nvme_admin": false, 00:13:08.592 "nvme_io": false, 00:13:08.592 "nvme_io_md": false, 00:13:08.592 "write_zeroes": true, 00:13:08.592 "zcopy": false, 00:13:08.592 "get_zone_info": false, 00:13:08.592 "zone_management": false, 00:13:08.592 "zone_append": false, 00:13:08.592 "compare": false, 00:13:08.592 "compare_and_write": false, 00:13:08.592 "abort": false, 00:13:08.592 "seek_hole": false, 00:13:08.592 "seek_data": false, 00:13:08.592 "copy": false, 00:13:08.592 "nvme_iov_md": false 00:13:08.592 }, 00:13:08.592 "driver_specific": { 00:13:08.592 "raid": { 00:13:08.592 "uuid": "8beec2bf-13dc-4351-b842-febae144dd84", 00:13:08.592 "strip_size_kb": 64, 00:13:08.592 "state": "online", 00:13:08.592 "raid_level": "raid5f", 00:13:08.592 "superblock": true, 00:13:08.592 "num_base_bdevs": 3, 00:13:08.592 "num_base_bdevs_discovered": 3, 00:13:08.592 "num_base_bdevs_operational": 3, 00:13:08.592 "base_bdevs_list": 
[ 00:13:08.592 { 00:13:08.592 "name": "NewBaseBdev", 00:13:08.592 "uuid": "85807469-8b8d-4824-8d25-a67dd78989d0", 00:13:08.592 "is_configured": true, 00:13:08.592 "data_offset": 2048, 00:13:08.592 "data_size": 63488 00:13:08.592 }, 00:13:08.592 { 00:13:08.592 "name": "BaseBdev2", 00:13:08.592 "uuid": "95ced154-8eeb-45e9-a28d-98f5274a9c10", 00:13:08.592 "is_configured": true, 00:13:08.592 "data_offset": 2048, 00:13:08.592 "data_size": 63488 00:13:08.592 }, 00:13:08.592 { 00:13:08.592 "name": "BaseBdev3", 00:13:08.592 "uuid": "2c688688-4245-4270-b2f8-b145b859cbb8", 00:13:08.592 "is_configured": true, 00:13:08.592 "data_offset": 2048, 00:13:08.592 "data_size": 63488 00:13:08.592 } 00:13:08.592 ] 00:13:08.592 } 00:13:08.592 } 00:13:08.592 }' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:08.592 BaseBdev2 00:13:08.592 BaseBdev3' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.592 01:57:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.592 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.592 01:57:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.850 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.851 [2024-12-07 01:57:14.076557] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.851 [2024-12-07 01:57:14.076628] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:08.851 [2024-12-07 01:57:14.076729] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:08.851 [2024-12-07 01:57:14.077009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:08.851 [2024-12-07 01:57:14.077063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90755 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 90755 ']' 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 90755 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90755 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90755' 00:13:08.851 killing process with pid 90755 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 90755 00:13:08.851 [2024-12-07 01:57:14.117054] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:08.851 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 90755 00:13:08.851 [2024-12-07 01:57:14.147079] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.110 01:57:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:09.110 ************************************ 00:13:09.110 END TEST raid5f_state_function_test_sb 00:13:09.110 ************************************ 00:13:09.110 00:13:09.110 real 0m8.981s 00:13:09.110 user 0m15.384s 00:13:09.110 sys 0m1.806s 00:13:09.110 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.110 01:57:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.110 01:57:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:13:09.110 01:57:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:09.110 01:57:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.110 01:57:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
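Before the superblock test starts below, the state-function test above repeatedly pipes `rpc_cmd bdev_raid_get_bdevs all` through jq and compares fields via `verify_raid_bdev_state`. The sketch below is an illustration of what that check computes, not SPDK code: it replays the jq filter `.[] | select(.name == "Existed_Raid")` in Python against a condensed copy of the JSON dump captured in the log (values copied verbatim from the `Existed_Raid` output above).

```python
import json

# Condensed copy of the bdev_raid_get_bdevs output captured in the log;
# the field values are taken verbatim from the "Existed_Raid" dump above.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "uuid": "8beec2bf-13dc-4351-b842-febae144dd84",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "raid5f",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

# Python equivalent of the jq filter: .[] | select(.name == "Existed_Raid")
info = next(b for b in raid_bdevs if b["name"] == "Existed_Raid")

# verify_raid_bdev_state compares these fields against the expected values
# the test passes in (here: online / raid5f / strip size 64 / 3 base bdevs).
assert info["state"] == "online"
assert info["raid_level"] == "raid5f"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 3
print(info["state"])  # -> online
```

The real script does the same comparison in bash with `[[ ... == ... ]]` pattern matches, as visible in the `verify_raid_bdev_state` calls throughout the log.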
00:13:09.110 ************************************ 00:13:09.110 START TEST raid5f_superblock_test 00:13:09.110 ************************************ 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91359 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91359 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 91359 ']' 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:09.110 01:57:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.110 [2024-12-07 01:57:14.545524] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:09.110 [2024-12-07 01:57:14.545648] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91359 ] 00:13:09.369 [2024-12-07 01:57:14.689476] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.369 [2024-12-07 01:57:14.732636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.369 [2024-12-07 01:57:14.773878] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.369 [2024-12-07 01:57:14.773910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.935 malloc1 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.935 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.935 [2024-12-07 01:57:15.387500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:09.935 [2024-12-07 01:57:15.387606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.935 [2024-12-07 01:57:15.387645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:09.935 [2024-12-07 01:57:15.387694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.935 [2024-12-07 01:57:15.389820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.935 [2024-12-07 01:57:15.389889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:10.193 pt1 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.193 malloc2 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.193 [2024-12-07 01:57:15.432693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:10.193 [2024-12-07 01:57:15.432887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.193 [2024-12-07 01:57:15.432969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:10.193 [2024-12-07 01:57:15.433053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.193 [2024-12-07 01:57:15.437844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.193 [2024-12-07 01:57:15.437994] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:10.193 pt2 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.193 malloc3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.193 [2024-12-07 01:57:15.467386] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:10.193 [2024-12-07 01:57:15.467474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.193 [2024-12-07 01:57:15.467507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:10.193 [2024-12-07 01:57:15.467536] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.193 [2024-12-07 01:57:15.469594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.193 [2024-12-07 01:57:15.469689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:10.193 pt3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.193 [2024-12-07 01:57:15.479426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:10.193 [2024-12-07 01:57:15.481263] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:10.193 [2024-12-07 01:57:15.481355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:10.193 [2024-12-07 01:57:15.481540] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:10.193 [2024-12-07 01:57:15.481586] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:13:10.193 [2024-12-07 01:57:15.481865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:13:10.193 [2024-12-07 01:57:15.482309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:10.193 [2024-12-07 01:57:15.482360] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:10.193 [2024-12-07 01:57:15.482520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.193 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.194 "name": "raid_bdev1", 00:13:10.194 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:10.194 "strip_size_kb": 64, 00:13:10.194 "state": "online", 00:13:10.194 "raid_level": "raid5f", 00:13:10.194 "superblock": true, 00:13:10.194 "num_base_bdevs": 3, 00:13:10.194 "num_base_bdevs_discovered": 3, 00:13:10.194 "num_base_bdevs_operational": 3, 00:13:10.194 "base_bdevs_list": [ 00:13:10.194 { 00:13:10.194 "name": "pt1", 00:13:10.194 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.194 "is_configured": true, 00:13:10.194 "data_offset": 2048, 00:13:10.194 "data_size": 63488 00:13:10.194 }, 00:13:10.194 { 00:13:10.194 "name": "pt2", 00:13:10.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.194 "is_configured": true, 00:13:10.194 "data_offset": 2048, 00:13:10.194 "data_size": 63488 00:13:10.194 }, 00:13:10.194 { 00:13:10.194 "name": "pt3", 00:13:10.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.194 "is_configured": true, 00:13:10.194 "data_offset": 2048, 00:13:10.194 "data_size": 63488 00:13:10.194 } 00:13:10.194 ] 00:13:10.194 }' 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.194 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:10.452 01:57:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.452 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.452 [2024-12-07 01:57:15.907418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.710 01:57:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.710 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:10.710 "name": "raid_bdev1", 00:13:10.710 "aliases": [ 00:13:10.710 "d2468896-3826-476c-8bb2-8bdbe2669259" 00:13:10.710 ], 00:13:10.710 "product_name": "Raid Volume", 00:13:10.710 "block_size": 512, 00:13:10.710 "num_blocks": 126976, 00:13:10.710 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:10.710 "assigned_rate_limits": { 00:13:10.710 "rw_ios_per_sec": 0, 00:13:10.710 "rw_mbytes_per_sec": 0, 00:13:10.710 "r_mbytes_per_sec": 0, 00:13:10.710 "w_mbytes_per_sec": 0 00:13:10.710 }, 00:13:10.710 "claimed": false, 00:13:10.710 "zoned": false, 00:13:10.710 "supported_io_types": { 00:13:10.710 "read": true, 00:13:10.710 "write": true, 00:13:10.710 "unmap": false, 00:13:10.710 "flush": false, 00:13:10.710 "reset": true, 00:13:10.710 "nvme_admin": false, 00:13:10.710 "nvme_io": false, 00:13:10.710 "nvme_io_md": false, 
00:13:10.710 "write_zeroes": true, 00:13:10.710 "zcopy": false, 00:13:10.710 "get_zone_info": false, 00:13:10.710 "zone_management": false, 00:13:10.710 "zone_append": false, 00:13:10.710 "compare": false, 00:13:10.710 "compare_and_write": false, 00:13:10.710 "abort": false, 00:13:10.710 "seek_hole": false, 00:13:10.710 "seek_data": false, 00:13:10.710 "copy": false, 00:13:10.710 "nvme_iov_md": false 00:13:10.710 }, 00:13:10.710 "driver_specific": { 00:13:10.710 "raid": { 00:13:10.710 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:10.710 "strip_size_kb": 64, 00:13:10.710 "state": "online", 00:13:10.710 "raid_level": "raid5f", 00:13:10.710 "superblock": true, 00:13:10.710 "num_base_bdevs": 3, 00:13:10.710 "num_base_bdevs_discovered": 3, 00:13:10.710 "num_base_bdevs_operational": 3, 00:13:10.710 "base_bdevs_list": [ 00:13:10.710 { 00:13:10.710 "name": "pt1", 00:13:10.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:10.710 "is_configured": true, 00:13:10.710 "data_offset": 2048, 00:13:10.710 "data_size": 63488 00:13:10.710 }, 00:13:10.710 { 00:13:10.710 "name": "pt2", 00:13:10.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:10.710 "is_configured": true, 00:13:10.710 "data_offset": 2048, 00:13:10.710 "data_size": 63488 00:13:10.710 }, 00:13:10.710 { 00:13:10.710 "name": "pt3", 00:13:10.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:10.710 "is_configured": true, 00:13:10.710 "data_offset": 2048, 00:13:10.710 "data_size": 63488 00:13:10.710 } 00:13:10.710 ] 00:13:10.710 } 00:13:10.710 } 00:13:10.710 }' 00:13:10.710 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:10.710 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:10.710 pt2 00:13:10.710 pt3' 00:13:10.710 01:57:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.710 
01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.710 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:10.969 [2024-12-07 01:57:16.210960] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d2468896-3826-476c-8bb2-8bdbe2669259 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d2468896-3826-476c-8bb2-8bdbe2669259 ']' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:10.969 01:57:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 [2024-12-07 01:57:16.258719] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.969 [2024-12-07 01:57:16.258742] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:10.969 [2024-12-07 01:57:16.258825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:10.969 [2024-12-07 01:57:16.258899] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:10.969 [2024-12-07 01:57:16.258911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.969 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.969 [2024-12-07 01:57:16.406482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:10.969 [2024-12-07 01:57:16.408384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:10.970 [2024-12-07 01:57:16.408468] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:10.970 [2024-12-07 01:57:16.408534] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:10.970 [2024-12-07 01:57:16.408613] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:10.970 [2024-12-07 01:57:16.408679] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:10.970 [2024-12-07 01:57:16.408750] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:10.970 [2024-12-07 01:57:16.408782] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:10.970 request: 00:13:10.970 { 00:13:10.970 "name": "raid_bdev1", 00:13:10.970 "raid_level": "raid5f", 00:13:10.970 "base_bdevs": [ 00:13:10.970 "malloc1", 00:13:10.970 "malloc2", 00:13:10.970 "malloc3" 00:13:10.970 ], 00:13:10.970 "strip_size_kb": 64, 00:13:10.970 "superblock": false, 00:13:10.970 "method": "bdev_raid_create", 00:13:10.970 "req_id": 1 00:13:10.970 } 00:13:10.970 Got JSON-RPC error response 00:13:10.970 response: 00:13:10.970 { 00:13:10.970 "code": -17, 00:13:10.970 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:10.970 } 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:10.970 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.970 
01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.227 [2024-12-07 01:57:16.470339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:11.227 [2024-12-07 01:57:16.470422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.227 [2024-12-07 01:57:16.470440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:11.227 [2024-12-07 01:57:16.470451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.227 [2024-12-07 01:57:16.472558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.227 [2024-12-07 01:57:16.472605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:11.227 [2024-12-07 01:57:16.472696] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:11.227 [2024-12-07 01:57:16.472737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:11.227 pt1 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.227 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.228 "name": "raid_bdev1", 00:13:11.228 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:11.228 "strip_size_kb": 64, 00:13:11.228 "state": "configuring", 00:13:11.228 "raid_level": "raid5f", 00:13:11.228 "superblock": true, 00:13:11.228 "num_base_bdevs": 3, 00:13:11.228 "num_base_bdevs_discovered": 1, 00:13:11.228 
"num_base_bdevs_operational": 3, 00:13:11.228 "base_bdevs_list": [ 00:13:11.228 { 00:13:11.228 "name": "pt1", 00:13:11.228 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.228 "is_configured": true, 00:13:11.228 "data_offset": 2048, 00:13:11.228 "data_size": 63488 00:13:11.228 }, 00:13:11.228 { 00:13:11.228 "name": null, 00:13:11.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.228 "is_configured": false, 00:13:11.228 "data_offset": 2048, 00:13:11.228 "data_size": 63488 00:13:11.228 }, 00:13:11.228 { 00:13:11.228 "name": null, 00:13:11.228 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.228 "is_configured": false, 00:13:11.228 "data_offset": 2048, 00:13:11.228 "data_size": 63488 00:13:11.228 } 00:13:11.228 ] 00:13:11.228 }' 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.228 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.486 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:11.486 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:11.486 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.486 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.486 [2024-12-07 01:57:16.941556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:11.486 [2024-12-07 01:57:16.941669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:11.486 [2024-12-07 01:57:16.941707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:11.486 [2024-12-07 01:57:16.941739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:11.486 [2024-12-07 01:57:16.942171] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:11.486 [2024-12-07 01:57:16.942250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:11.486 [2024-12-07 01:57:16.942365] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:11.486 [2024-12-07 01:57:16.942422] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:11.486 pt2 00:13:11.744 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.744 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:11.744 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.744 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.745 [2024-12-07 01:57:16.953542] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.745 01:57:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.745 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.745 "name": "raid_bdev1", 00:13:11.745 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:11.745 "strip_size_kb": 64, 00:13:11.745 "state": "configuring", 00:13:11.745 "raid_level": "raid5f", 00:13:11.745 "superblock": true, 00:13:11.745 "num_base_bdevs": 3, 00:13:11.745 "num_base_bdevs_discovered": 1, 00:13:11.745 "num_base_bdevs_operational": 3, 00:13:11.745 "base_bdevs_list": [ 00:13:11.745 { 00:13:11.745 "name": "pt1", 00:13:11.745 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:11.745 "is_configured": true, 00:13:11.745 "data_offset": 2048, 00:13:11.745 "data_size": 63488 00:13:11.745 }, 00:13:11.745 { 00:13:11.745 "name": null, 00:13:11.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:11.745 "is_configured": false, 00:13:11.745 "data_offset": 0, 00:13:11.745 "data_size": 63488 00:13:11.745 }, 00:13:11.745 { 00:13:11.745 "name": null, 00:13:11.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:11.745 "is_configured": false, 00:13:11.745 "data_offset": 2048, 00:13:11.745 "data_size": 63488 00:13:11.745 } 00:13:11.745 ] 00:13:11.745 }' 00:13:11.745 01:57:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.745 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.004 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:12.004 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.004 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:12.004 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.004 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.005 [2024-12-07 01:57:17.376805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:12.005 [2024-12-07 01:57:17.376861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.005 [2024-12-07 01:57:17.376881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:12.005 [2024-12-07 01:57:17.376890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.005 [2024-12-07 01:57:17.377253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.005 [2024-12-07 01:57:17.377269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:12.005 [2024-12-07 01:57:17.377335] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:12.005 [2024-12-07 01:57:17.377355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:12.005 pt2 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.005 01:57:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.005 [2024-12-07 01:57:17.388776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:12.005 [2024-12-07 01:57:17.388818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:12.005 [2024-12-07 01:57:17.388837] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:12.005 [2024-12-07 01:57:17.388845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:12.005 [2024-12-07 01:57:17.389146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:12.005 [2024-12-07 01:57:17.389161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:12.005 [2024-12-07 01:57:17.389213] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:12.005 [2024-12-07 01:57:17.389244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:12.005 [2024-12-07 01:57:17.389338] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:12.005 [2024-12-07 01:57:17.389346] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:12.005 [2024-12-07 01:57:17.389552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:12.005 [2024-12-07 01:57:17.389934] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:12.005 [2024-12-07 01:57:17.389953] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:13:12.005 [2024-12-07 01:57:17.390065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.005 pt3 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.005 "name": "raid_bdev1", 00:13:12.005 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:12.005 "strip_size_kb": 64, 00:13:12.005 "state": "online", 00:13:12.005 "raid_level": "raid5f", 00:13:12.005 "superblock": true, 00:13:12.005 "num_base_bdevs": 3, 00:13:12.005 "num_base_bdevs_discovered": 3, 00:13:12.005 "num_base_bdevs_operational": 3, 00:13:12.005 "base_bdevs_list": [ 00:13:12.005 { 00:13:12.005 "name": "pt1", 00:13:12.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.005 "is_configured": true, 00:13:12.005 "data_offset": 2048, 00:13:12.005 "data_size": 63488 00:13:12.005 }, 00:13:12.005 { 00:13:12.005 "name": "pt2", 00:13:12.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.005 "is_configured": true, 00:13:12.005 "data_offset": 2048, 00:13:12.005 "data_size": 63488 00:13:12.005 }, 00:13:12.005 { 00:13:12.005 "name": "pt3", 00:13:12.005 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.005 "is_configured": true, 00:13:12.005 "data_offset": 2048, 00:13:12.005 "data_size": 63488 00:13:12.005 } 00:13:12.005 ] 00:13:12.005 }' 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.005 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.573 [2024-12-07 01:57:17.840235] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:12.573 "name": "raid_bdev1", 00:13:12.573 "aliases": [ 00:13:12.573 "d2468896-3826-476c-8bb2-8bdbe2669259" 00:13:12.573 ], 00:13:12.573 "product_name": "Raid Volume", 00:13:12.573 "block_size": 512, 00:13:12.573 "num_blocks": 126976, 00:13:12.573 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:12.573 "assigned_rate_limits": { 00:13:12.573 "rw_ios_per_sec": 0, 00:13:12.573 "rw_mbytes_per_sec": 0, 00:13:12.573 "r_mbytes_per_sec": 0, 00:13:12.573 "w_mbytes_per_sec": 0 00:13:12.573 }, 00:13:12.573 "claimed": false, 00:13:12.573 "zoned": false, 00:13:12.573 "supported_io_types": { 00:13:12.573 "read": true, 00:13:12.573 "write": true, 00:13:12.573 "unmap": false, 00:13:12.573 "flush": false, 00:13:12.573 "reset": true, 00:13:12.573 "nvme_admin": false, 00:13:12.573 "nvme_io": false, 00:13:12.573 "nvme_io_md": false, 00:13:12.573 "write_zeroes": true, 00:13:12.573 "zcopy": false, 00:13:12.573 
"get_zone_info": false, 00:13:12.573 "zone_management": false, 00:13:12.573 "zone_append": false, 00:13:12.573 "compare": false, 00:13:12.573 "compare_and_write": false, 00:13:12.573 "abort": false, 00:13:12.573 "seek_hole": false, 00:13:12.573 "seek_data": false, 00:13:12.573 "copy": false, 00:13:12.573 "nvme_iov_md": false 00:13:12.573 }, 00:13:12.573 "driver_specific": { 00:13:12.573 "raid": { 00:13:12.573 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:12.573 "strip_size_kb": 64, 00:13:12.573 "state": "online", 00:13:12.573 "raid_level": "raid5f", 00:13:12.573 "superblock": true, 00:13:12.573 "num_base_bdevs": 3, 00:13:12.573 "num_base_bdevs_discovered": 3, 00:13:12.573 "num_base_bdevs_operational": 3, 00:13:12.573 "base_bdevs_list": [ 00:13:12.573 { 00:13:12.573 "name": "pt1", 00:13:12.573 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:12.573 "is_configured": true, 00:13:12.573 "data_offset": 2048, 00:13:12.573 "data_size": 63488 00:13:12.573 }, 00:13:12.573 { 00:13:12.573 "name": "pt2", 00:13:12.573 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.573 "is_configured": true, 00:13:12.573 "data_offset": 2048, 00:13:12.573 "data_size": 63488 00:13:12.573 }, 00:13:12.573 { 00:13:12.573 "name": "pt3", 00:13:12.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.573 "is_configured": true, 00:13:12.573 "data_offset": 2048, 00:13:12.573 "data_size": 63488 00:13:12.573 } 00:13:12.573 ] 00:13:12.573 } 00:13:12.573 } 00:13:12.573 }' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:12.573 pt2 00:13:12.573 pt3' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.573 01:57:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.573 01:57:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.573 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 [2024-12-07 01:57:18.111831] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d2468896-3826-476c-8bb2-8bdbe2669259 '!=' d2468896-3826-476c-8bb2-8bdbe2669259 ']' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 [2024-12-07 01:57:18.159616] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.833 "name": "raid_bdev1", 00:13:12.833 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:12.833 "strip_size_kb": 64, 00:13:12.833 "state": "online", 00:13:12.833 "raid_level": "raid5f", 00:13:12.833 "superblock": true, 00:13:12.833 "num_base_bdevs": 3, 00:13:12.833 "num_base_bdevs_discovered": 2, 00:13:12.833 "num_base_bdevs_operational": 2, 00:13:12.833 "base_bdevs_list": [ 00:13:12.833 { 00:13:12.833 "name": null, 00:13:12.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.833 "is_configured": false, 00:13:12.833 "data_offset": 0, 00:13:12.833 "data_size": 63488 00:13:12.833 }, 00:13:12.833 { 00:13:12.833 "name": "pt2", 00:13:12.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:12.833 "is_configured": true, 00:13:12.833 "data_offset": 2048, 00:13:12.833 "data_size": 63488 00:13:12.833 }, 00:13:12.833 { 00:13:12.833 "name": "pt3", 00:13:12.833 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:12.833 "is_configured": true, 00:13:12.833 "data_offset": 2048, 00:13:12.833 "data_size": 63488 00:13:12.833 } 00:13:12.833 ] 00:13:12.833 }' 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.833 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.402 [2024-12-07 01:57:18.566953] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:13.402 [2024-12-07 01:57:18.567025] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.402 [2024-12-07 01:57:18.567124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.402 [2024-12-07 01:57:18.567183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.402 [2024-12-07 01:57:18.567192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:13.402 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.403 [2024-12-07 01:57:18.650800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:13.403 [2024-12-07 01:57:18.650851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.403 [2024-12-07 01:57:18.650870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:13.403 [2024-12-07 01:57:18.650879] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:13.403 [2024-12-07 01:57:18.652959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.403 [2024-12-07 01:57:18.652993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:13.403 [2024-12-07 01:57:18.653058] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:13.403 [2024-12-07 01:57:18.653100] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:13.403 pt2 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.403 "name": "raid_bdev1", 00:13:13.403 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:13.403 "strip_size_kb": 64, 00:13:13.403 "state": "configuring", 00:13:13.403 "raid_level": "raid5f", 00:13:13.403 "superblock": true, 00:13:13.403 "num_base_bdevs": 3, 00:13:13.403 "num_base_bdevs_discovered": 1, 00:13:13.403 "num_base_bdevs_operational": 2, 00:13:13.403 "base_bdevs_list": [ 00:13:13.403 { 00:13:13.403 "name": null, 00:13:13.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.403 "is_configured": false, 00:13:13.403 "data_offset": 2048, 00:13:13.403 "data_size": 63488 00:13:13.403 }, 00:13:13.403 { 00:13:13.403 "name": "pt2", 00:13:13.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.403 "is_configured": true, 00:13:13.403 "data_offset": 2048, 00:13:13.403 "data_size": 63488 00:13:13.403 }, 00:13:13.403 { 00:13:13.403 "name": null, 00:13:13.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.403 "is_configured": false, 00:13:13.403 "data_offset": 2048, 00:13:13.403 "data_size": 63488 00:13:13.403 } 00:13:13.403 ] 00:13:13.403 }' 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.403 01:57:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.662 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.662 [2024-12-07 01:57:19.054191] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:13.662 [2024-12-07 01:57:19.054287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:13.662 [2024-12-07 01:57:19.054324] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:13.663 [2024-12-07 01:57:19.054351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:13.663 [2024-12-07 01:57:19.054768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:13.663 [2024-12-07 01:57:19.054823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:13.663 [2024-12-07 01:57:19.054925] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:13.663 [2024-12-07 01:57:19.054971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:13.663 [2024-12-07 01:57:19.055071] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:13.663 [2024-12-07 01:57:19.055105] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:13.663 [2024-12-07 01:57:19.055380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:13.663 [2024-12-07 01:57:19.055890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:13.663 [2024-12-07 01:57:19.055944] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:13.663 [2024-12-07 01:57:19.056225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.663 pt3 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.663 01:57:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.663 "name": "raid_bdev1", 00:13:13.663 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:13.663 "strip_size_kb": 64, 00:13:13.663 "state": "online", 00:13:13.663 "raid_level": "raid5f", 00:13:13.663 "superblock": true, 00:13:13.663 "num_base_bdevs": 3, 00:13:13.663 "num_base_bdevs_discovered": 2, 00:13:13.663 "num_base_bdevs_operational": 2, 00:13:13.663 "base_bdevs_list": [ 00:13:13.663 { 00:13:13.663 "name": null, 00:13:13.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.663 "is_configured": false, 00:13:13.663 "data_offset": 2048, 00:13:13.663 "data_size": 63488 00:13:13.663 }, 00:13:13.663 { 00:13:13.663 "name": "pt2", 00:13:13.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:13.663 "is_configured": true, 00:13:13.663 "data_offset": 2048, 00:13:13.663 "data_size": 63488 00:13:13.663 }, 00:13:13.663 { 00:13:13.663 "name": "pt3", 00:13:13.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:13.663 "is_configured": true, 00:13:13.663 "data_offset": 2048, 00:13:13.663 "data_size": 63488 00:13:13.663 } 00:13:13.663 ] 00:13:13.663 }' 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.663 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 [2024-12-07 01:57:19.505457] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.229 [2024-12-07 01:57:19.505484] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:14.229 [2024-12-07 01:57:19.505556] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:14.229 [2024-12-07 01:57:19.505614] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:14.229 [2024-12-07 01:57:19.505625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.229 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.229 [2024-12-07 01:57:19.569319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:14.229 [2024-12-07 01:57:19.569374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.229 [2024-12-07 01:57:19.569408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:14.229 [2024-12-07 01:57:19.569419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.229 [2024-12-07 01:57:19.571652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.229 [2024-12-07 01:57:19.571700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:14.229 [2024-12-07 01:57:19.571770] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:14.229 [2024-12-07 01:57:19.571814] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:14.229 [2024-12-07 01:57:19.571921] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:14.229 [2024-12-07 01:57:19.571948] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:14.229 [2024-12-07 01:57:19.571966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:14.230 [2024-12-07 01:57:19.572027] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:14.230 pt1 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:14.230 01:57:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.230 "name": "raid_bdev1", 00:13:14.230 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:14.230 "strip_size_kb": 64, 00:13:14.230 "state": "configuring", 00:13:14.230 "raid_level": "raid5f", 00:13:14.230 
"superblock": true, 00:13:14.230 "num_base_bdevs": 3, 00:13:14.230 "num_base_bdevs_discovered": 1, 00:13:14.230 "num_base_bdevs_operational": 2, 00:13:14.230 "base_bdevs_list": [ 00:13:14.230 { 00:13:14.230 "name": null, 00:13:14.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.230 "is_configured": false, 00:13:14.230 "data_offset": 2048, 00:13:14.230 "data_size": 63488 00:13:14.230 }, 00:13:14.230 { 00:13:14.230 "name": "pt2", 00:13:14.230 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.230 "is_configured": true, 00:13:14.230 "data_offset": 2048, 00:13:14.230 "data_size": 63488 00:13:14.230 }, 00:13:14.230 { 00:13:14.230 "name": null, 00:13:14.230 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.230 "is_configured": false, 00:13:14.230 "data_offset": 2048, 00:13:14.230 "data_size": 63488 00:13:14.230 } 00:13:14.230 ] 00:13:14.230 }' 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.230 01:57:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.798 [2024-12-07 01:57:20.060547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:14.798 [2024-12-07 01:57:20.060673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:14.798 [2024-12-07 01:57:20.060740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:14.798 [2024-12-07 01:57:20.060785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:14.798 [2024-12-07 01:57:20.061244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:14.798 [2024-12-07 01:57:20.061312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:14.798 [2024-12-07 01:57:20.061417] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:14.798 [2024-12-07 01:57:20.061485] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:14.798 [2024-12-07 01:57:20.061617] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:14.798 [2024-12-07 01:57:20.061671] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:14.798 [2024-12-07 01:57:20.061955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:14.798 [2024-12-07 01:57:20.062463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:14.798 [2024-12-07 01:57:20.062510] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:14.798 [2024-12-07 01:57:20.062712] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.798 pt3 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.798 "name": "raid_bdev1", 00:13:14.798 "uuid": "d2468896-3826-476c-8bb2-8bdbe2669259", 00:13:14.798 "strip_size_kb": 64, 00:13:14.798 "state": "online", 00:13:14.798 "raid_level": 
"raid5f", 00:13:14.798 "superblock": true, 00:13:14.798 "num_base_bdevs": 3, 00:13:14.798 "num_base_bdevs_discovered": 2, 00:13:14.798 "num_base_bdevs_operational": 2, 00:13:14.798 "base_bdevs_list": [ 00:13:14.798 { 00:13:14.798 "name": null, 00:13:14.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.798 "is_configured": false, 00:13:14.798 "data_offset": 2048, 00:13:14.798 "data_size": 63488 00:13:14.798 }, 00:13:14.798 { 00:13:14.798 "name": "pt2", 00:13:14.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:14.798 "is_configured": true, 00:13:14.798 "data_offset": 2048, 00:13:14.798 "data_size": 63488 00:13:14.798 }, 00:13:14.798 { 00:13:14.798 "name": "pt3", 00:13:14.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:14.798 "is_configured": true, 00:13:14.798 "data_offset": 2048, 00:13:14.798 "data_size": 63488 00:13:14.798 } 00:13:14.798 ] 00:13:14.798 }' 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.798 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.365 [2024-12-07 01:57:20.579984] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d2468896-3826-476c-8bb2-8bdbe2669259 '!=' d2468896-3826-476c-8bb2-8bdbe2669259 ']' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91359 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 91359 ']' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 91359 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91359 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.365 killing process with pid 91359 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91359' 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 91359 00:13:15.365 [2024-12-07 01:57:20.658909] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.365 [2024-12-07 01:57:20.659003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:13:15.365 [2024-12-07 01:57:20.659073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.365 [2024-12-07 01:57:20.659082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:15.365 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 91359 00:13:15.365 [2024-12-07 01:57:20.691733] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.625 01:57:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:15.625 00:13:15.625 real 0m6.457s 00:13:15.625 user 0m10.772s 00:13:15.625 sys 0m1.388s 00:13:15.625 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:15.625 01:57:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.625 ************************************ 00:13:15.625 END TEST raid5f_superblock_test 00:13:15.625 ************************************ 00:13:15.625 01:57:20 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:15.625 01:57:20 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:15.625 01:57:20 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:15.625 01:57:20 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:15.625 01:57:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:15.625 ************************************ 00:13:15.625 START TEST raid5f_rebuild_test 00:13:15.625 ************************************ 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:15.625 01:57:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91792 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91792 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 91792 ']' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:15.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:15.625 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.884 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:13:15.884 Zero copy mechanism will not be used. 00:13:15.884 [2024-12-07 01:57:21.096187] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:15.884 [2024-12-07 01:57:21.096316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91792 ] 00:13:15.884 [2024-12-07 01:57:21.239050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.884 [2024-12-07 01:57:21.282735] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.884 [2024-12-07 01:57:21.323301] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.884 [2024-12-07 01:57:21.323347] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.450 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 BaseBdev1_malloc 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.709 01:57:21 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 [2024-12-07 01:57:21.933479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:16.709 [2024-12-07 01:57:21.933542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.709 [2024-12-07 01:57:21.933571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:16.709 [2024-12-07 01:57:21.933607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.709 [2024-12-07 01:57:21.935696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.709 [2024-12-07 01:57:21.935730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:16.709 BaseBdev1 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 BaseBdev2_malloc 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 [2024-12-07 01:57:21.971439] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:13:16.709 [2024-12-07 01:57:21.971527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.709 [2024-12-07 01:57:21.971568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:16.709 [2024-12-07 01:57:21.971587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.709 [2024-12-07 01:57:21.976030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.709 [2024-12-07 01:57:21.976095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:16.709 BaseBdev2 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.709 BaseBdev3_malloc 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.709 01:57:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:16.710 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 [2024-12-07 01:57:22.001657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:16.710 [2024-12-07 01:57:22.001733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.710 [2024-12-07 01:57:22.001759] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:13:16.710 [2024-12-07 01:57:22.001767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.710 [2024-12-07 01:57:22.003751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.710 [2024-12-07 01:57:22.003782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:16.710 BaseBdev3 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 spare_malloc 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 spare_delay 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 [2024-12-07 01:57:22.041998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:16.710 [2024-12-07 01:57:22.042047] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:16.710 [2024-12-07 01:57:22.042074] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:16.710 [2024-12-07 01:57:22.042083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:16.710 [2024-12-07 01:57:22.044178] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:16.710 [2024-12-07 01:57:22.044212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:16.710 spare 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 [2024-12-07 01:57:22.054035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:16.710 [2024-12-07 01:57:22.055882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.710 [2024-12-07 01:57:22.055969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.710 [2024-12-07 01:57:22.056042] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:16.710 [2024-12-07 01:57:22.056072] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:16.710 [2024-12-07 01:57:22.056321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:16.710 [2024-12-07 01:57:22.056766] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:16.710 [2024-12-07 01:57:22.056793] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:16.710 [2024-12-07 01:57:22.056918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.710 01:57:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.710 "name": "raid_bdev1", 00:13:16.710 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:16.710 "strip_size_kb": 64, 00:13:16.710 "state": "online", 00:13:16.710 "raid_level": "raid5f", 00:13:16.710 "superblock": false, 00:13:16.710 "num_base_bdevs": 3, 00:13:16.710 "num_base_bdevs_discovered": 3, 00:13:16.710 "num_base_bdevs_operational": 3, 00:13:16.710 "base_bdevs_list": [ 00:13:16.710 { 00:13:16.710 "name": "BaseBdev1", 00:13:16.710 "uuid": "a6712dcf-be1c-55cc-81c6-7a61b06c881a", 00:13:16.710 "is_configured": true, 00:13:16.710 "data_offset": 0, 00:13:16.710 "data_size": 65536 00:13:16.710 }, 00:13:16.710 { 00:13:16.710 "name": "BaseBdev2", 00:13:16.710 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:16.710 "is_configured": true, 00:13:16.710 "data_offset": 0, 00:13:16.710 "data_size": 65536 00:13:16.710 }, 00:13:16.710 { 00:13:16.710 "name": "BaseBdev3", 00:13:16.710 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:16.710 "is_configured": true, 00:13:16.710 "data_offset": 0, 00:13:16.710 "data_size": 65536 00:13:16.710 } 00:13:16.710 ] 00:13:16.710 }' 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.710 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 [2024-12-07 01:57:22.525702] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:13:17.276 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:17.534 [2024-12-07 01:57:22.793095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:17.534 /dev/nbd0 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.534 1+0 records in 00:13:17.534 1+0 records out 00:13:17.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361231 s, 11.3 MB/s 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:17.534 01:57:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:17.791 512+0 records in 00:13:17.791 512+0 records out 00:13:17.791 67108864 bytes (67 MB, 64 MiB) copied, 0.281363 s, 239 MB/s 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.791 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.050 [2024-12-07 01:57:23.329457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.050 [2024-12-07 01:57:23.366782] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.050 "name": "raid_bdev1", 00:13:18.050 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:18.050 "strip_size_kb": 64, 00:13:18.050 "state": "online", 00:13:18.050 "raid_level": "raid5f", 00:13:18.050 "superblock": false, 00:13:18.050 "num_base_bdevs": 3, 00:13:18.050 "num_base_bdevs_discovered": 2, 00:13:18.050 "num_base_bdevs_operational": 2, 00:13:18.050 "base_bdevs_list": [ 00:13:18.050 { 00:13:18.050 "name": null, 00:13:18.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.050 "is_configured": false, 00:13:18.050 "data_offset": 0, 00:13:18.050 "data_size": 65536 00:13:18.050 }, 00:13:18.050 { 00:13:18.050 "name": "BaseBdev2", 00:13:18.050 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:18.050 "is_configured": true, 00:13:18.050 "data_offset": 0, 00:13:18.050 "data_size": 65536 00:13:18.050 }, 00:13:18.050 { 00:13:18.050 "name": "BaseBdev3", 00:13:18.050 "uuid": 
"3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:18.050 "is_configured": true, 00:13:18.050 "data_offset": 0, 00:13:18.050 "data_size": 65536 00:13:18.050 } 00:13:18.050 ] 00:13:18.050 }' 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.050 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.308 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.308 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.308 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.308 [2024-12-07 01:57:23.766122] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.567 [2024-12-07 01:57:23.770263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:18.567 01:57:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.567 01:57:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:18.567 [2024-12-07 01:57:23.772537] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.500 01:57:24 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.500 "name": "raid_bdev1", 00:13:19.500 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:19.500 "strip_size_kb": 64, 00:13:19.500 "state": "online", 00:13:19.500 "raid_level": "raid5f", 00:13:19.500 "superblock": false, 00:13:19.500 "num_base_bdevs": 3, 00:13:19.500 "num_base_bdevs_discovered": 3, 00:13:19.500 "num_base_bdevs_operational": 3, 00:13:19.500 "process": { 00:13:19.500 "type": "rebuild", 00:13:19.500 "target": "spare", 00:13:19.500 "progress": { 00:13:19.500 "blocks": 20480, 00:13:19.500 "percent": 15 00:13:19.500 } 00:13:19.500 }, 00:13:19.500 "base_bdevs_list": [ 00:13:19.500 { 00:13:19.500 "name": "spare", 00:13:19.500 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:19.500 "is_configured": true, 00:13:19.500 "data_offset": 0, 00:13:19.500 "data_size": 65536 00:13:19.500 }, 00:13:19.500 { 00:13:19.500 "name": "BaseBdev2", 00:13:19.500 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:19.500 "is_configured": true, 00:13:19.500 "data_offset": 0, 00:13:19.500 "data_size": 65536 00:13:19.500 }, 00:13:19.500 { 00:13:19.500 "name": "BaseBdev3", 00:13:19.500 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:19.500 "is_configured": true, 00:13:19.500 "data_offset": 0, 00:13:19.500 "data_size": 65536 00:13:19.500 } 00:13:19.500 ] 00:13:19.500 }' 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.500 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.500 [2024-12-07 01:57:24.908951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.765 [2024-12-07 01:57:24.979996] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:19.765 [2024-12-07 01:57:24.980058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.765 [2024-12-07 01:57:24.980074] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:19.765 [2024-12-07 01:57:24.980084] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.765 01:57:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.765 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.765 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.765 "name": "raid_bdev1", 00:13:19.765 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:19.765 "strip_size_kb": 64, 00:13:19.765 "state": "online", 00:13:19.765 "raid_level": "raid5f", 00:13:19.765 "superblock": false, 00:13:19.765 "num_base_bdevs": 3, 00:13:19.765 "num_base_bdevs_discovered": 2, 00:13:19.765 "num_base_bdevs_operational": 2, 00:13:19.765 "base_bdevs_list": [ 00:13:19.765 { 00:13:19.765 "name": null, 00:13:19.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.765 "is_configured": false, 00:13:19.765 "data_offset": 0, 00:13:19.765 "data_size": 65536 00:13:19.765 }, 00:13:19.765 { 00:13:19.765 "name": "BaseBdev2", 00:13:19.765 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:19.765 "is_configured": true, 00:13:19.765 "data_offset": 0, 00:13:19.765 "data_size": 65536 00:13:19.765 }, 00:13:19.765 { 00:13:19.765 "name": "BaseBdev3", 00:13:19.765 "uuid": 
"3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:19.765 "is_configured": true, 00:13:19.765 "data_offset": 0, 00:13:19.765 "data_size": 65536 00:13:19.765 } 00:13:19.766 ] 00:13:19.766 }' 00:13:19.766 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.766 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.039 "name": "raid_bdev1", 00:13:20.039 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:20.039 "strip_size_kb": 64, 00:13:20.039 "state": "online", 00:13:20.039 "raid_level": "raid5f", 00:13:20.039 "superblock": false, 00:13:20.039 "num_base_bdevs": 3, 00:13:20.039 "num_base_bdevs_discovered": 2, 00:13:20.039 "num_base_bdevs_operational": 2, 00:13:20.039 "base_bdevs_list": [ 00:13:20.039 { 00:13:20.039 
"name": null, 00:13:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.039 "is_configured": false, 00:13:20.039 "data_offset": 0, 00:13:20.039 "data_size": 65536 00:13:20.039 }, 00:13:20.039 { 00:13:20.039 "name": "BaseBdev2", 00:13:20.039 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:20.039 "is_configured": true, 00:13:20.039 "data_offset": 0, 00:13:20.039 "data_size": 65536 00:13:20.039 }, 00:13:20.039 { 00:13:20.039 "name": "BaseBdev3", 00:13:20.039 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:20.039 "is_configured": true, 00:13:20.039 "data_offset": 0, 00:13:20.039 "data_size": 65536 00:13:20.039 } 00:13:20.039 ] 00:13:20.039 }' 00:13:20.039 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.299 [2024-12-07 01:57:25.580719] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:20.299 [2024-12-07 01:57:25.584416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:20.299 [2024-12-07 01:57:25.586508] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.299 01:57:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@663 -- # sleep 1 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.234 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.234 "name": "raid_bdev1", 00:13:21.235 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:21.235 "strip_size_kb": 64, 00:13:21.235 "state": "online", 00:13:21.235 "raid_level": "raid5f", 00:13:21.235 "superblock": false, 00:13:21.235 "num_base_bdevs": 3, 00:13:21.235 "num_base_bdevs_discovered": 3, 00:13:21.235 "num_base_bdevs_operational": 3, 00:13:21.235 "process": { 00:13:21.235 "type": "rebuild", 00:13:21.235 "target": "spare", 00:13:21.235 "progress": { 00:13:21.235 "blocks": 20480, 00:13:21.235 "percent": 15 00:13:21.235 } 00:13:21.235 }, 00:13:21.235 "base_bdevs_list": [ 00:13:21.235 { 00:13:21.235 "name": "spare", 00:13:21.235 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:21.235 "is_configured": true, 00:13:21.235 
"data_offset": 0, 00:13:21.235 "data_size": 65536 00:13:21.235 }, 00:13:21.235 { 00:13:21.235 "name": "BaseBdev2", 00:13:21.235 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:21.235 "is_configured": true, 00:13:21.235 "data_offset": 0, 00:13:21.235 "data_size": 65536 00:13:21.235 }, 00:13:21.235 { 00:13:21.235 "name": "BaseBdev3", 00:13:21.235 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:21.235 "is_configured": true, 00:13:21.235 "data_offset": 0, 00:13:21.235 "data_size": 65536 00:13:21.235 } 00:13:21.235 ] 00:13:21.235 }' 00:13:21.235 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.235 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=444 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.494 
01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.494 "name": "raid_bdev1", 00:13:21.494 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:21.494 "strip_size_kb": 64, 00:13:21.494 "state": "online", 00:13:21.494 "raid_level": "raid5f", 00:13:21.494 "superblock": false, 00:13:21.494 "num_base_bdevs": 3, 00:13:21.494 "num_base_bdevs_discovered": 3, 00:13:21.494 "num_base_bdevs_operational": 3, 00:13:21.494 "process": { 00:13:21.494 "type": "rebuild", 00:13:21.494 "target": "spare", 00:13:21.494 "progress": { 00:13:21.494 "blocks": 22528, 00:13:21.494 "percent": 17 00:13:21.494 } 00:13:21.494 }, 00:13:21.494 "base_bdevs_list": [ 00:13:21.494 { 00:13:21.494 "name": "spare", 00:13:21.494 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:21.494 "is_configured": true, 00:13:21.494 "data_offset": 0, 00:13:21.494 "data_size": 65536 00:13:21.494 }, 00:13:21.494 { 00:13:21.494 "name": "BaseBdev2", 00:13:21.494 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:21.494 "is_configured": true, 00:13:21.494 "data_offset": 0, 00:13:21.494 "data_size": 65536 00:13:21.494 }, 00:13:21.494 { 00:13:21.494 "name": "BaseBdev3", 00:13:21.494 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:21.494 "is_configured": true, 00:13:21.494 "data_offset": 0, 00:13:21.494 "data_size": 65536 00:13:21.494 
} 00:13:21.494 ] 00:13:21.494 }' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.494 01:57:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.430 01:57:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.689 01:57:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.689 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.689 "name": "raid_bdev1", 00:13:22.689 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:22.689 
"strip_size_kb": 64, 00:13:22.689 "state": "online", 00:13:22.689 "raid_level": "raid5f", 00:13:22.689 "superblock": false, 00:13:22.689 "num_base_bdevs": 3, 00:13:22.689 "num_base_bdevs_discovered": 3, 00:13:22.689 "num_base_bdevs_operational": 3, 00:13:22.689 "process": { 00:13:22.689 "type": "rebuild", 00:13:22.689 "target": "spare", 00:13:22.689 "progress": { 00:13:22.689 "blocks": 45056, 00:13:22.689 "percent": 34 00:13:22.689 } 00:13:22.689 }, 00:13:22.689 "base_bdevs_list": [ 00:13:22.689 { 00:13:22.689 "name": "spare", 00:13:22.689 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:22.689 "is_configured": true, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 }, 00:13:22.689 { 00:13:22.689 "name": "BaseBdev2", 00:13:22.689 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:22.689 "is_configured": true, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 }, 00:13:22.689 { 00:13:22.689 "name": "BaseBdev3", 00:13:22.689 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:22.689 "is_configured": true, 00:13:22.689 "data_offset": 0, 00:13:22.689 "data_size": 65536 00:13:22.689 } 00:13:22.689 ] 00:13:22.689 }' 00:13:22.689 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.689 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.689 01:57:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.689 01:57:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.689 01:57:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.640 01:57:29 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.640 "name": "raid_bdev1", 00:13:23.640 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:23.640 "strip_size_kb": 64, 00:13:23.640 "state": "online", 00:13:23.640 "raid_level": "raid5f", 00:13:23.640 "superblock": false, 00:13:23.640 "num_base_bdevs": 3, 00:13:23.640 "num_base_bdevs_discovered": 3, 00:13:23.640 "num_base_bdevs_operational": 3, 00:13:23.640 "process": { 00:13:23.640 "type": "rebuild", 00:13:23.640 "target": "spare", 00:13:23.640 "progress": { 00:13:23.640 "blocks": 69632, 00:13:23.640 "percent": 53 00:13:23.640 } 00:13:23.640 }, 00:13:23.640 "base_bdevs_list": [ 00:13:23.640 { 00:13:23.640 "name": "spare", 00:13:23.640 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:23.640 "is_configured": true, 00:13:23.640 "data_offset": 0, 00:13:23.640 "data_size": 65536 00:13:23.640 }, 00:13:23.640 { 00:13:23.640 "name": "BaseBdev2", 00:13:23.640 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:23.640 
"is_configured": true, 00:13:23.640 "data_offset": 0, 00:13:23.640 "data_size": 65536 00:13:23.640 }, 00:13:23.640 { 00:13:23.640 "name": "BaseBdev3", 00:13:23.640 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:23.640 "is_configured": true, 00:13:23.640 "data_offset": 0, 00:13:23.640 "data_size": 65536 00:13:23.640 } 00:13:23.640 ] 00:13:23.640 }' 00:13:23.640 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.898 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.898 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.898 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.898 01:57:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.833 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.833 "name": "raid_bdev1", 00:13:24.833 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:24.833 "strip_size_kb": 64, 00:13:24.833 "state": "online", 00:13:24.833 "raid_level": "raid5f", 00:13:24.833 "superblock": false, 00:13:24.833 "num_base_bdevs": 3, 00:13:24.833 "num_base_bdevs_discovered": 3, 00:13:24.833 "num_base_bdevs_operational": 3, 00:13:24.833 "process": { 00:13:24.833 "type": "rebuild", 00:13:24.833 "target": "spare", 00:13:24.833 "progress": { 00:13:24.833 "blocks": 92160, 00:13:24.833 "percent": 70 00:13:24.833 } 00:13:24.833 }, 00:13:24.833 "base_bdevs_list": [ 00:13:24.833 { 00:13:24.833 "name": "spare", 00:13:24.833 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:24.833 "is_configured": true, 00:13:24.833 "data_offset": 0, 00:13:24.834 "data_size": 65536 00:13:24.834 }, 00:13:24.834 { 00:13:24.834 "name": "BaseBdev2", 00:13:24.834 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:24.834 "is_configured": true, 00:13:24.834 "data_offset": 0, 00:13:24.834 "data_size": 65536 00:13:24.834 }, 00:13:24.834 { 00:13:24.834 "name": "BaseBdev3", 00:13:24.834 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:24.834 "is_configured": true, 00:13:24.834 "data_offset": 0, 00:13:24.834 "data_size": 65536 00:13:24.834 } 00:13:24.834 ] 00:13:24.834 }' 00:13:24.834 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.834 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.834 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.834 01:57:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.834 01:57:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.211 "name": "raid_bdev1", 00:13:26.211 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:26.211 "strip_size_kb": 64, 00:13:26.211 "state": "online", 00:13:26.211 "raid_level": "raid5f", 00:13:26.211 "superblock": false, 00:13:26.211 "num_base_bdevs": 3, 00:13:26.211 "num_base_bdevs_discovered": 3, 00:13:26.211 "num_base_bdevs_operational": 3, 00:13:26.211 "process": { 00:13:26.211 "type": "rebuild", 00:13:26.211 "target": "spare", 00:13:26.211 "progress": { 00:13:26.211 "blocks": 114688, 00:13:26.211 "percent": 87 00:13:26.211 } 00:13:26.211 }, 00:13:26.211 "base_bdevs_list": [ 00:13:26.211 { 
00:13:26.211 "name": "spare", 00:13:26.211 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:26.211 "is_configured": true, 00:13:26.211 "data_offset": 0, 00:13:26.211 "data_size": 65536 00:13:26.211 }, 00:13:26.211 { 00:13:26.211 "name": "BaseBdev2", 00:13:26.211 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:26.211 "is_configured": true, 00:13:26.211 "data_offset": 0, 00:13:26.211 "data_size": 65536 00:13:26.211 }, 00:13:26.211 { 00:13:26.211 "name": "BaseBdev3", 00:13:26.211 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:26.211 "is_configured": true, 00:13:26.211 "data_offset": 0, 00:13:26.211 "data_size": 65536 00:13:26.211 } 00:13:26.211 ] 00:13:26.211 }' 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.211 01:57:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.779 [2024-12-07 01:57:32.022430] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:26.779 [2024-12-07 01:57:32.022531] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:26.779 [2024-12-07 01:57:32.022584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.038 01:57:32 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.038 "name": "raid_bdev1", 00:13:27.038 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:27.038 "strip_size_kb": 64, 00:13:27.038 "state": "online", 00:13:27.038 "raid_level": "raid5f", 00:13:27.038 "superblock": false, 00:13:27.038 "num_base_bdevs": 3, 00:13:27.038 "num_base_bdevs_discovered": 3, 00:13:27.038 "num_base_bdevs_operational": 3, 00:13:27.038 "base_bdevs_list": [ 00:13:27.038 { 00:13:27.038 "name": "spare", 00:13:27.038 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:27.038 "is_configured": true, 00:13:27.038 "data_offset": 0, 00:13:27.038 "data_size": 65536 00:13:27.038 }, 00:13:27.038 { 00:13:27.038 "name": "BaseBdev2", 00:13:27.038 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:27.038 "is_configured": true, 00:13:27.038 "data_offset": 0, 00:13:27.038 "data_size": 65536 00:13:27.038 }, 00:13:27.038 { 00:13:27.038 "name": "BaseBdev3", 00:13:27.038 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:27.038 "is_configured": true, 00:13:27.038 "data_offset": 0, 00:13:27.038 "data_size": 65536 00:13:27.038 } 
00:13:27.038 ] 00:13:27.038 }' 00:13:27.038 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.298 "name": "raid_bdev1", 00:13:27.298 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:27.298 "strip_size_kb": 64, 00:13:27.298 "state": "online", 00:13:27.298 "raid_level": "raid5f", 00:13:27.298 "superblock": false, 
00:13:27.298 "num_base_bdevs": 3, 00:13:27.298 "num_base_bdevs_discovered": 3, 00:13:27.298 "num_base_bdevs_operational": 3, 00:13:27.298 "base_bdevs_list": [ 00:13:27.298 { 00:13:27.298 "name": "spare", 00:13:27.298 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 }, 00:13:27.298 { 00:13:27.298 "name": "BaseBdev2", 00:13:27.298 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 }, 00:13:27.298 { 00:13:27.298 "name": "BaseBdev3", 00:13:27.298 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 } 00:13:27.298 ] 00:13:27.298 }' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.298 
01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.298 "name": "raid_bdev1", 00:13:27.298 "uuid": "14fd216e-3d5a-4598-aef2-5a77ade1699c", 00:13:27.298 "strip_size_kb": 64, 00:13:27.298 "state": "online", 00:13:27.298 "raid_level": "raid5f", 00:13:27.298 "superblock": false, 00:13:27.298 "num_base_bdevs": 3, 00:13:27.298 "num_base_bdevs_discovered": 3, 00:13:27.298 "num_base_bdevs_operational": 3, 00:13:27.298 "base_bdevs_list": [ 00:13:27.298 { 00:13:27.298 "name": "spare", 00:13:27.298 "uuid": "ebb75966-816d-53ab-938c-6129cc1c1aef", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 }, 00:13:27.298 { 00:13:27.298 "name": "BaseBdev2", 00:13:27.298 "uuid": "9f1dbce1-48e3-5d41-9593-c9e214db9b88", 00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 }, 00:13:27.298 { 00:13:27.298 "name": "BaseBdev3", 00:13:27.298 "uuid": "3157726c-fdf4-5e2a-aca6-d47ed914a5df", 
00:13:27.298 "is_configured": true, 00:13:27.298 "data_offset": 0, 00:13:27.298 "data_size": 65536 00:13:27.298 } 00:13:27.298 ] 00:13:27.298 }' 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.298 01:57:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.865 [2024-12-07 01:57:33.113756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.865 [2024-12-07 01:57:33.113790] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.865 [2024-12-07 01:57:33.113881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.865 [2024-12-07 01:57:33.113980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.865 [2024-12-07 01:57:33.113996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:27.865 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.124 /dev/nbd0 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.124 1+0 records in 00:13:28.124 1+0 records out 00:13:28.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189666 s, 21.6 MB/s 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.124 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:28.383 /dev/nbd1 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:28.383 01:57:33 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.383 1+0 records in 00:13:28.383 1+0 records out 00:13:28.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514018 s, 8.0 MB/s 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.383 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.641 01:57:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:28.641 01:57:34 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91792 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 91792 ']' 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 91792 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91792 00:13:28.900 killing process with pid 91792 00:13:28.900 Received shutdown signal, test time was about 60.000000 seconds 00:13:28.900 00:13:28.900 Latency(us) 00:13:28.900 [2024-12-07T01:57:34.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.900 [2024-12-07T01:57:34.362Z] =================================================================================================================== 00:13:28.900 [2024-12-07T01:57:34.362Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91792' 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 91792 00:13:28.900 [2024-12-07 01:57:34.152011] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:28.900 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 91792 00:13:28.900 [2024-12-07 01:57:34.191221] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:29.160 00:13:29.160 real 0m13.408s 00:13:29.160 user 0m16.694s 00:13:29.160 sys 0m1.876s 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.160 ************************************ 00:13:29.160 END TEST raid5f_rebuild_test 00:13:29.160 ************************************ 00:13:29.160 01:57:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:29.160 01:57:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:29.160 01:57:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.160 01:57:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:29.160 ************************************ 00:13:29.160 START TEST raid5f_rebuild_test_sb 00:13:29.160 ************************************ 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 
00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local 
raid_bdev_name=raid_bdev1 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92211 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92211 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 92211 ']' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.160 01:57:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.160 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:29.160 Zero copy mechanism will not be used. 00:13:29.160 [2024-12-07 01:57:34.576593] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:13:29.160 [2024-12-07 01:57:34.576749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92211 ] 00:13:29.419 [2024-12-07 01:57:34.721382] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.419 [2024-12-07 01:57:34.766591] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.419 [2024-12-07 01:57:34.807471] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.419 [2024-12-07 01:57:34.807509] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.986 01:57:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.986 BaseBdev1_malloc 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.986 [2024-12-07 01:57:35.416491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:29.986 [2024-12-07 01:57:35.416571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.986 [2024-12-07 01:57:35.416606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:29.986 [2024-12-07 01:57:35.416621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.986 [2024-12-07 01:57:35.418742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.986 [2024-12-07 01:57:35.418781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:29.986 BaseBdev1 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.986 BaseBdev2_malloc 00:13:29.986 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 [2024-12-07 01:57:35.453573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:30.244 [2024-12-07 01:57:35.453632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.244 [2024-12-07 01:57:35.453658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:30.244 [2024-12-07 01:57:35.453683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.244 [2024-12-07 01:57:35.455965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.244 [2024-12-07 01:57:35.456005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:30.244 BaseBdev2 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 BaseBdev3_malloc 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 [2024-12-07 01:57:35.482047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:30.244 [2024-12-07 01:57:35.482101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.244 [2024-12-07 01:57:35.482144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:30.244 [2024-12-07 01:57:35.482153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.244 [2024-12-07 01:57:35.484194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.244 [2024-12-07 01:57:35.484226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:30.244 BaseBdev3 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 spare_malloc 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 spare_delay 00:13:30.244 
01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 [2024-12-07 01:57:35.522317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:30.244 [2024-12-07 01:57:35.522367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.244 [2024-12-07 01:57:35.522408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:30.244 [2024-12-07 01:57:35.522417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.244 [2024-12-07 01:57:35.524485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.244 [2024-12-07 01:57:35.524517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:30.244 spare 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.244 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.244 [2024-12-07 01:57:35.534361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.244 [2024-12-07 01:57:35.536167] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:30.244 [2024-12-07 01:57:35.536232] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.244 [2024-12-07 01:57:35.536378] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:30.244 [2024-12-07 01:57:35.536393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:30.244 [2024-12-07 01:57:35.536639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:30.244 [2024-12-07 01:57:35.537072] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:30.244 [2024-12-07 01:57:35.537092] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:30.244 [2024-12-07 01:57:35.537207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.245 "name": "raid_bdev1", 00:13:30.245 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:30.245 "strip_size_kb": 64, 00:13:30.245 "state": "online", 00:13:30.245 "raid_level": "raid5f", 00:13:30.245 "superblock": true, 00:13:30.245 "num_base_bdevs": 3, 00:13:30.245 "num_base_bdevs_discovered": 3, 00:13:30.245 "num_base_bdevs_operational": 3, 00:13:30.245 "base_bdevs_list": [ 00:13:30.245 { 00:13:30.245 "name": "BaseBdev1", 00:13:30.245 "uuid": "30765a18-4881-5200-b081-70e490d466fc", 00:13:30.245 "is_configured": true, 00:13:30.245 "data_offset": 2048, 00:13:30.245 "data_size": 63488 00:13:30.245 }, 00:13:30.245 { 00:13:30.245 "name": "BaseBdev2", 00:13:30.245 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:30.245 "is_configured": true, 00:13:30.245 "data_offset": 2048, 00:13:30.245 "data_size": 63488 00:13:30.245 }, 00:13:30.245 { 00:13:30.245 "name": "BaseBdev3", 00:13:30.245 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:30.245 "is_configured": true, 00:13:30.245 "data_offset": 2048, 00:13:30.245 "data_size": 63488 00:13:30.245 } 00:13:30.245 ] 00:13:30.245 }' 00:13:30.245 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.245 01:57:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.502 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:30.502 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:30.502 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.502 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.761 [2024-12-07 01:57:35.965986] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.761 01:57:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:30.761 01:57:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.761 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:31.020 [2024-12-07 01:57:36.237373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:31.020 /dev/nbd0 00:13:31.020 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:31.020 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:31.020 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:31.020 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:31.020 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 
00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.021 1+0 records in 00:13:31.021 1+0 records out 00:13:31.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404395 s, 10.1 MB/s 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:31.021 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:31.279 496+0 records in 00:13:31.279 496+0 records out 00:13:31.279 65011712 bytes (65 MB, 62 MiB) copied, 0.283934 s, 229 MB/s 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.279 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.538 [2024-12-07 01:57:36.821691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.538 [2024-12-07 01:57:36.838935] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.538 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.539 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.539 "name": "raid_bdev1", 00:13:31.539 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:31.539 "strip_size_kb": 64, 00:13:31.539 "state": "online", 00:13:31.539 "raid_level": "raid5f", 00:13:31.539 "superblock": true, 00:13:31.539 "num_base_bdevs": 3, 00:13:31.539 "num_base_bdevs_discovered": 2, 00:13:31.539 "num_base_bdevs_operational": 2, 00:13:31.539 "base_bdevs_list": [ 00:13:31.539 { 00:13:31.539 "name": null, 00:13:31.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.539 "is_configured": false, 00:13:31.539 "data_offset": 0, 00:13:31.539 "data_size": 63488 00:13:31.539 }, 00:13:31.539 { 00:13:31.539 "name": "BaseBdev2", 00:13:31.539 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:31.539 "is_configured": true, 00:13:31.539 "data_offset": 2048, 00:13:31.539 "data_size": 63488 00:13:31.539 }, 00:13:31.539 { 00:13:31.539 "name": "BaseBdev3", 00:13:31.539 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:31.539 "is_configured": true, 00:13:31.539 "data_offset": 2048, 00:13:31.539 "data_size": 63488 00:13:31.539 } 00:13:31.539 ] 00:13:31.539 }' 00:13:31.539 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.539 01:57:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.106 01:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.106 01:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.106 01:57:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.106 [2024-12-07 01:57:37.266250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.106 [2024-12-07 01:57:37.270420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:32.106 01:57:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.107 01:57:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:32.107 [2024-12-07 01:57:37.273017] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.043 "name": "raid_bdev1", 00:13:33.043 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:33.043 "strip_size_kb": 64, 00:13:33.043 "state": "online", 00:13:33.043 "raid_level": "raid5f", 00:13:33.043 "superblock": true, 00:13:33.043 "num_base_bdevs": 3, 00:13:33.043 "num_base_bdevs_discovered": 3, 00:13:33.043 "num_base_bdevs_operational": 3, 00:13:33.043 "process": { 00:13:33.043 "type": "rebuild", 00:13:33.043 "target": "spare", 00:13:33.043 "progress": { 
00:13:33.043 "blocks": 20480, 00:13:33.043 "percent": 16 00:13:33.043 } 00:13:33.043 }, 00:13:33.043 "base_bdevs_list": [ 00:13:33.043 { 00:13:33.043 "name": "spare", 00:13:33.043 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:33.043 "is_configured": true, 00:13:33.043 "data_offset": 2048, 00:13:33.043 "data_size": 63488 00:13:33.043 }, 00:13:33.043 { 00:13:33.043 "name": "BaseBdev2", 00:13:33.043 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:33.043 "is_configured": true, 00:13:33.043 "data_offset": 2048, 00:13:33.043 "data_size": 63488 00:13:33.043 }, 00:13:33.043 { 00:13:33.043 "name": "BaseBdev3", 00:13:33.043 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:33.043 "is_configured": true, 00:13:33.043 "data_offset": 2048, 00:13:33.043 "data_size": 63488 00:13:33.043 } 00:13:33.043 ] 00:13:33.043 }' 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.043 [2024-12-07 01:57:38.421231] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.043 [2024-12-07 01:57:38.480098] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.043 [2024-12-07 01:57:38.480183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:33.043 [2024-12-07 01:57:38.480199] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.043 [2024-12-07 01:57:38.480208] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.043 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.044 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.044 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.302 01:57:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.302 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.302 "name": "raid_bdev1", 00:13:33.302 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:33.302 "strip_size_kb": 64, 00:13:33.302 "state": "online", 00:13:33.302 "raid_level": "raid5f", 00:13:33.302 "superblock": true, 00:13:33.302 "num_base_bdevs": 3, 00:13:33.302 "num_base_bdevs_discovered": 2, 00:13:33.302 "num_base_bdevs_operational": 2, 00:13:33.302 "base_bdevs_list": [ 00:13:33.302 { 00:13:33.302 "name": null, 00:13:33.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.302 "is_configured": false, 00:13:33.302 "data_offset": 0, 00:13:33.302 "data_size": 63488 00:13:33.302 }, 00:13:33.302 { 00:13:33.302 "name": "BaseBdev2", 00:13:33.302 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:33.302 "is_configured": true, 00:13:33.302 "data_offset": 2048, 00:13:33.302 "data_size": 63488 00:13:33.302 }, 00:13:33.302 { 00:13:33.302 "name": "BaseBdev3", 00:13:33.302 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:33.302 "is_configured": true, 00:13:33.302 "data_offset": 2048, 00:13:33.302 "data_size": 63488 00:13:33.302 } 00:13:33.302 ] 00:13:33.302 }' 00:13:33.302 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.302 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.561 "name": "raid_bdev1", 00:13:33.561 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:33.561 "strip_size_kb": 64, 00:13:33.561 "state": "online", 00:13:33.561 "raid_level": "raid5f", 00:13:33.561 "superblock": true, 00:13:33.561 "num_base_bdevs": 3, 00:13:33.561 "num_base_bdevs_discovered": 2, 00:13:33.561 "num_base_bdevs_operational": 2, 00:13:33.561 "base_bdevs_list": [ 00:13:33.561 { 00:13:33.561 "name": null, 00:13:33.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.561 "is_configured": false, 00:13:33.561 "data_offset": 0, 00:13:33.561 "data_size": 63488 00:13:33.561 }, 00:13:33.561 { 00:13:33.561 "name": "BaseBdev2", 00:13:33.561 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:33.561 "is_configured": true, 00:13:33.561 "data_offset": 2048, 00:13:33.561 "data_size": 63488 00:13:33.561 }, 00:13:33.561 { 00:13:33.561 "name": "BaseBdev3", 00:13:33.561 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:33.561 "is_configured": true, 00:13:33.561 "data_offset": 2048, 00:13:33.561 "data_size": 63488 00:13:33.561 } 00:13:33.561 ] 00:13:33.561 }' 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.561 01:57:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.819 [2024-12-07 01:57:39.028684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.819 [2024-12-07 01:57:39.032218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:13:33.819 [2024-12-07 01:57:39.034326] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.819 01:57:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.771 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.771 "name": "raid_bdev1", 00:13:34.771 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:34.771 "strip_size_kb": 64, 00:13:34.771 "state": "online", 00:13:34.771 "raid_level": "raid5f", 00:13:34.771 "superblock": true, 00:13:34.771 "num_base_bdevs": 3, 00:13:34.771 "num_base_bdevs_discovered": 3, 00:13:34.771 "num_base_bdevs_operational": 3, 00:13:34.771 "process": { 00:13:34.771 "type": "rebuild", 00:13:34.771 "target": "spare", 00:13:34.771 "progress": { 00:13:34.771 "blocks": 20480, 00:13:34.772 "percent": 16 00:13:34.772 } 00:13:34.772 }, 00:13:34.772 "base_bdevs_list": [ 00:13:34.772 { 00:13:34.772 "name": "spare", 00:13:34.772 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:34.772 "is_configured": true, 00:13:34.772 "data_offset": 2048, 00:13:34.772 "data_size": 63488 00:13:34.772 }, 00:13:34.772 { 00:13:34.772 "name": "BaseBdev2", 00:13:34.772 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:34.772 "is_configured": true, 00:13:34.772 "data_offset": 2048, 00:13:34.772 "data_size": 63488 00:13:34.772 }, 00:13:34.772 { 00:13:34.772 "name": "BaseBdev3", 00:13:34.772 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:34.772 "is_configured": true, 00:13:34.772 "data_offset": 2048, 00:13:34.772 "data_size": 63488 00:13:34.772 } 00:13:34.772 ] 00:13:34.772 }' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.772 
01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:34.772 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=458 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:34.772 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.029 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.029 "name": "raid_bdev1", 00:13:35.029 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:35.029 "strip_size_kb": 64, 00:13:35.029 "state": "online", 00:13:35.029 "raid_level": "raid5f", 00:13:35.029 "superblock": true, 00:13:35.029 "num_base_bdevs": 3, 00:13:35.029 "num_base_bdevs_discovered": 3, 00:13:35.029 "num_base_bdevs_operational": 3, 00:13:35.029 "process": { 00:13:35.029 "type": "rebuild", 00:13:35.029 "target": "spare", 00:13:35.029 "progress": { 00:13:35.029 "blocks": 22528, 00:13:35.029 "percent": 17 00:13:35.029 } 00:13:35.029 }, 00:13:35.029 "base_bdevs_list": [ 00:13:35.029 { 00:13:35.029 "name": "spare", 00:13:35.029 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:35.029 "is_configured": true, 00:13:35.029 "data_offset": 2048, 00:13:35.029 "data_size": 63488 00:13:35.029 }, 00:13:35.029 { 00:13:35.029 "name": "BaseBdev2", 00:13:35.029 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:35.029 "is_configured": true, 00:13:35.029 "data_offset": 2048, 00:13:35.029 "data_size": 63488 00:13:35.029 }, 00:13:35.029 { 00:13:35.029 "name": "BaseBdev3", 00:13:35.029 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:35.029 "is_configured": true, 00:13:35.029 "data_offset": 2048, 00:13:35.029 "data_size": 63488 00:13:35.029 } 00:13:35.029 ] 00:13:35.029 }' 00:13:35.029 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.029 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.030 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.030 01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.030 
01:57:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.961 "name": "raid_bdev1", 00:13:35.961 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:35.961 "strip_size_kb": 64, 00:13:35.961 "state": "online", 00:13:35.961 "raid_level": "raid5f", 00:13:35.961 "superblock": true, 00:13:35.961 "num_base_bdevs": 3, 00:13:35.961 "num_base_bdevs_discovered": 3, 00:13:35.961 "num_base_bdevs_operational": 3, 00:13:35.961 "process": { 00:13:35.961 "type": "rebuild", 00:13:35.961 "target": "spare", 00:13:35.961 "progress": { 00:13:35.961 "blocks": 45056, 00:13:35.961 "percent": 35 00:13:35.961 } 00:13:35.961 }, 00:13:35.961 
"base_bdevs_list": [ 00:13:35.961 { 00:13:35.961 "name": "spare", 00:13:35.961 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:35.961 "is_configured": true, 00:13:35.961 "data_offset": 2048, 00:13:35.961 "data_size": 63488 00:13:35.961 }, 00:13:35.961 { 00:13:35.961 "name": "BaseBdev2", 00:13:35.961 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:35.961 "is_configured": true, 00:13:35.961 "data_offset": 2048, 00:13:35.961 "data_size": 63488 00:13:35.961 }, 00:13:35.961 { 00:13:35.961 "name": "BaseBdev3", 00:13:35.961 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:35.961 "is_configured": true, 00:13:35.961 "data_offset": 2048, 00:13:35.961 "data_size": 63488 00:13:35.961 } 00:13:35.961 ] 00:13:35.961 }' 00:13:35.961 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.218 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.218 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.218 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.218 01:57:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.150 01:57:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.150 "name": "raid_bdev1", 00:13:37.150 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:37.150 "strip_size_kb": 64, 00:13:37.150 "state": "online", 00:13:37.150 "raid_level": "raid5f", 00:13:37.150 "superblock": true, 00:13:37.150 "num_base_bdevs": 3, 00:13:37.150 "num_base_bdevs_discovered": 3, 00:13:37.150 "num_base_bdevs_operational": 3, 00:13:37.150 "process": { 00:13:37.150 "type": "rebuild", 00:13:37.150 "target": "spare", 00:13:37.150 "progress": { 00:13:37.150 "blocks": 69632, 00:13:37.150 "percent": 54 00:13:37.150 } 00:13:37.150 }, 00:13:37.150 "base_bdevs_list": [ 00:13:37.150 { 00:13:37.150 "name": "spare", 00:13:37.150 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:37.150 "is_configured": true, 00:13:37.150 "data_offset": 2048, 00:13:37.150 "data_size": 63488 00:13:37.150 }, 00:13:37.150 { 00:13:37.150 "name": "BaseBdev2", 00:13:37.150 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:37.150 "is_configured": true, 00:13:37.150 "data_offset": 2048, 00:13:37.150 "data_size": 63488 00:13:37.150 }, 00:13:37.150 { 00:13:37.150 "name": "BaseBdev3", 00:13:37.150 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:37.150 "is_configured": true, 00:13:37.150 "data_offset": 2048, 00:13:37.150 "data_size": 63488 00:13:37.150 } 00:13:37.150 ] 00:13:37.150 }' 00:13:37.150 01:57:42 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.150 01:57:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.521 "name": "raid_bdev1", 00:13:38.521 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:38.521 
"strip_size_kb": 64, 00:13:38.521 "state": "online", 00:13:38.521 "raid_level": "raid5f", 00:13:38.521 "superblock": true, 00:13:38.521 "num_base_bdevs": 3, 00:13:38.521 "num_base_bdevs_discovered": 3, 00:13:38.521 "num_base_bdevs_operational": 3, 00:13:38.521 "process": { 00:13:38.521 "type": "rebuild", 00:13:38.521 "target": "spare", 00:13:38.521 "progress": { 00:13:38.521 "blocks": 92160, 00:13:38.521 "percent": 72 00:13:38.521 } 00:13:38.521 }, 00:13:38.521 "base_bdevs_list": [ 00:13:38.521 { 00:13:38.521 "name": "spare", 00:13:38.521 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:38.521 "is_configured": true, 00:13:38.521 "data_offset": 2048, 00:13:38.521 "data_size": 63488 00:13:38.521 }, 00:13:38.521 { 00:13:38.521 "name": "BaseBdev2", 00:13:38.521 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:38.521 "is_configured": true, 00:13:38.521 "data_offset": 2048, 00:13:38.521 "data_size": 63488 00:13:38.521 }, 00:13:38.521 { 00:13:38.521 "name": "BaseBdev3", 00:13:38.521 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:38.521 "is_configured": true, 00:13:38.521 "data_offset": 2048, 00:13:38.521 "data_size": 63488 00:13:38.521 } 00:13:38.521 ] 00:13:38.521 }' 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.521 01:57:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.454 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.454 "name": "raid_bdev1", 00:13:39.454 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:39.454 "strip_size_kb": 64, 00:13:39.455 "state": "online", 00:13:39.455 "raid_level": "raid5f", 00:13:39.455 "superblock": true, 00:13:39.455 "num_base_bdevs": 3, 00:13:39.455 "num_base_bdevs_discovered": 3, 00:13:39.455 "num_base_bdevs_operational": 3, 00:13:39.455 "process": { 00:13:39.455 "type": "rebuild", 00:13:39.455 "target": "spare", 00:13:39.455 "progress": { 00:13:39.455 "blocks": 116736, 00:13:39.455 "percent": 91 00:13:39.455 } 00:13:39.455 }, 00:13:39.455 "base_bdevs_list": [ 00:13:39.455 { 00:13:39.455 "name": "spare", 00:13:39.455 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:39.455 "is_configured": true, 00:13:39.455 "data_offset": 2048, 00:13:39.455 "data_size": 63488 00:13:39.455 }, 00:13:39.455 { 00:13:39.455 "name": "BaseBdev2", 00:13:39.455 "uuid": 
"01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:39.455 "is_configured": true, 00:13:39.455 "data_offset": 2048, 00:13:39.455 "data_size": 63488 00:13:39.455 }, 00:13:39.455 { 00:13:39.455 "name": "BaseBdev3", 00:13:39.455 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:39.455 "is_configured": true, 00:13:39.455 "data_offset": 2048, 00:13:39.455 "data_size": 63488 00:13:39.455 } 00:13:39.455 ] 00:13:39.455 }' 00:13:39.455 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.455 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.455 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.455 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.455 01:57:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.021 [2024-12-07 01:57:45.268588] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:40.021 [2024-12-07 01:57:45.268656] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:40.021 [2024-12-07 01:57:45.268768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.586 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.586 "name": "raid_bdev1", 00:13:40.586 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:40.586 "strip_size_kb": 64, 00:13:40.586 "state": "online", 00:13:40.586 "raid_level": "raid5f", 00:13:40.586 "superblock": true, 00:13:40.586 "num_base_bdevs": 3, 00:13:40.586 "num_base_bdevs_discovered": 3, 00:13:40.586 "num_base_bdevs_operational": 3, 00:13:40.586 "base_bdevs_list": [ 00:13:40.586 { 00:13:40.586 "name": "spare", 00:13:40.586 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:40.586 "is_configured": true, 00:13:40.586 "data_offset": 2048, 00:13:40.586 "data_size": 63488 00:13:40.586 }, 00:13:40.586 { 00:13:40.586 "name": "BaseBdev2", 00:13:40.586 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:40.586 "is_configured": true, 00:13:40.587 "data_offset": 2048, 00:13:40.587 "data_size": 63488 00:13:40.587 }, 00:13:40.587 { 00:13:40.587 "name": "BaseBdev3", 00:13:40.587 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:40.587 "is_configured": true, 00:13:40.587 "data_offset": 2048, 00:13:40.587 "data_size": 63488 00:13:40.587 } 00:13:40.587 ] 00:13:40.587 }' 00:13:40.587 01:57:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.587 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:40.587 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.845 "name": "raid_bdev1", 00:13:40.845 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:40.845 "strip_size_kb": 64, 00:13:40.845 "state": "online", 00:13:40.845 "raid_level": "raid5f", 00:13:40.845 "superblock": true, 00:13:40.845 "num_base_bdevs": 3, 00:13:40.845 "num_base_bdevs_discovered": 3, 00:13:40.845 "num_base_bdevs_operational": 3, 00:13:40.845 "base_bdevs_list": [ 
00:13:40.845 { 00:13:40.845 "name": "spare", 00:13:40.845 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 "data_size": 63488 00:13:40.845 }, 00:13:40.845 { 00:13:40.845 "name": "BaseBdev2", 00:13:40.845 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 "data_size": 63488 00:13:40.845 }, 00:13:40.845 { 00:13:40.845 "name": "BaseBdev3", 00:13:40.845 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 "data_size": 63488 00:13:40.845 } 00:13:40.845 ] 00:13:40.845 }' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.845 01:57:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.845 "name": "raid_bdev1", 00:13:40.845 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:40.845 "strip_size_kb": 64, 00:13:40.845 "state": "online", 00:13:40.845 "raid_level": "raid5f", 00:13:40.845 "superblock": true, 00:13:40.845 "num_base_bdevs": 3, 00:13:40.845 "num_base_bdevs_discovered": 3, 00:13:40.845 "num_base_bdevs_operational": 3, 00:13:40.845 "base_bdevs_list": [ 00:13:40.845 { 00:13:40.845 "name": "spare", 00:13:40.845 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 "data_size": 63488 00:13:40.845 }, 00:13:40.845 { 00:13:40.845 "name": "BaseBdev2", 00:13:40.845 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 "data_size": 63488 00:13:40.845 }, 00:13:40.845 { 00:13:40.845 "name": "BaseBdev3", 00:13:40.845 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:40.845 "is_configured": true, 00:13:40.845 "data_offset": 2048, 00:13:40.845 
"data_size": 63488 00:13:40.845 } 00:13:40.845 ] 00:13:40.845 }' 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.845 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 [2024-12-07 01:57:46.627391] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:41.411 [2024-12-07 01:57:46.627429] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:41.411 [2024-12-07 01:57:46.627535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:41.411 [2024-12-07 01:57:46.627611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:41.411 [2024-12-07 01:57:46.627621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:41.411 /dev/nbd0 00:13:41.411 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:41.668 01:57:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.668 1+0 records in 00:13:41.668 1+0 records out 00:13:41.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366217 s, 11.2 MB/s 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:41.668 01:57:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:41.668 /dev/nbd1 00:13:41.668 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:41.926 01:57:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.926 1+0 records in 00:13:41.926 1+0 records out 00:13:41.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478549 s, 8.6 MB/s 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.926 01:57:47 
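The `waitfornbd` helper exercised above polls `/proc/partitions` for the new nbd device, bounded at 20 tries, and (in the real helper) follows up with a single direct-I/O 4 KiB `dd` read to confirm the device answers. A generic sketch of the polling half, parameterized on the partitions file so it can run against a plain text file instead of a live `/proc/partitions`:

```shell
# Sketch of the waitfornbd polling pattern; part_file defaults to
# /proc/partitions but is a parameter here so the sketch is self-contained.
# The real helper also does `dd if=/dev/$nbd_name bs=4096 count=1 iflag=direct`
# once the name appears, which this sketch omits.
waitfornbd() {
    local nbd_name=$1 part_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        # -w matches the device name as a whole word, so nbd1 won't match nbd10
        grep -q -w "$nbd_name" "$part_file" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    echo "found $nbd_name"
}

# Simulate a partitions table containing the exported device.
printf 'major minor  #blocks  name\n  43  0  126976 nbd0\n' > /tmp/parts.txt
waitfornbd nbd0 /tmp/parts.txt
```

Bounding the loop keeps a device that never appears from hanging the test run; the caller treats a non-zero return as a setup failure.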
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.926 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.184 
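The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above compares the two exported bdevs while skipping the first 1 MiB on both sides, so per-device metadata such as the raid superblock region does not cause a spurious mismatch. A minimal reproduction with ordinary files in place of the nbd devices:

```shell
# Two 2 MiB files that differ only in their first bytes (a stand-in for
# per-bdev superblock data), identical everywhere past the 1 MiB mark.
dd if=/dev/zero of=/tmp/a.img bs=1M count=2 2>/dev/null
cp /tmp/a.img /tmp/b.img
printf 'different superblock' | dd of=/tmp/b.img conv=notrunc 2>/dev/null

# Plain cmp sees the difference at the start...
cmp -s /tmp/a.img /tmp/b.img || echo "differ"
# ...but skipping the first 1 MiB on both files, the data payload matches.
cmp -i 1048576 /tmp/a.img /tmp/b.img && echo "match past offset"
```

`cmp -i N` (or `--ignore-initial=N`) skips N bytes on both inputs; the test relies on this to verify that the rebuilt base bdev's data region matches the spare byte-for-byte.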
01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:42.184 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.443 [2024-12-07 01:57:47.675674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:42.443 
[2024-12-07 01:57:47.675727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.443 [2024-12-07 01:57:47.675775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:42.443 [2024-12-07 01:57:47.675784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.443 [2024-12-07 01:57:47.677948] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.443 [2024-12-07 01:57:47.678033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:42.443 [2024-12-07 01:57:47.678140] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:42.443 [2024-12-07 01:57:47.678213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.443 [2024-12-07 01:57:47.678370] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.443 [2024-12-07 01:57:47.678501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.443 spare 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.443 [2024-12-07 01:57:47.778421] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:13:42.443 [2024-12-07 01:57:47.778447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:42.443 [2024-12-07 01:57:47.778735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:13:42.443 [2024-12-07 01:57:47.779145] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:42.443 [2024-12-07 01:57:47.779166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:42.443 [2024-12-07 01:57:47.779302] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.443 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.443 "name": "raid_bdev1", 00:13:42.443 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:42.443 "strip_size_kb": 64, 00:13:42.443 "state": "online", 00:13:42.443 "raid_level": "raid5f", 00:13:42.443 "superblock": true, 00:13:42.443 "num_base_bdevs": 3, 00:13:42.443 "num_base_bdevs_discovered": 3, 00:13:42.443 "num_base_bdevs_operational": 3, 00:13:42.443 "base_bdevs_list": [ 00:13:42.443 { 00:13:42.443 "name": "spare", 00:13:42.443 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:42.443 "is_configured": true, 00:13:42.443 "data_offset": 2048, 00:13:42.443 "data_size": 63488 00:13:42.443 }, 00:13:42.443 { 00:13:42.443 "name": "BaseBdev2", 00:13:42.443 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:42.443 "is_configured": true, 00:13:42.443 "data_offset": 2048, 00:13:42.444 "data_size": 63488 00:13:42.444 }, 00:13:42.444 { 00:13:42.444 "name": "BaseBdev3", 00:13:42.444 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:42.444 "is_configured": true, 00:13:42.444 "data_offset": 2048, 00:13:42.444 "data_size": 63488 00:13:42.444 } 00:13:42.444 ] 00:13:42.444 }' 00:13:42.444 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.444 01:57:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.011 "name": "raid_bdev1", 00:13:43.011 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:43.011 "strip_size_kb": 64, 00:13:43.011 "state": "online", 00:13:43.011 "raid_level": "raid5f", 00:13:43.011 "superblock": true, 00:13:43.011 "num_base_bdevs": 3, 00:13:43.011 "num_base_bdevs_discovered": 3, 00:13:43.011 "num_base_bdevs_operational": 3, 00:13:43.011 "base_bdevs_list": [ 00:13:43.011 { 00:13:43.011 "name": "spare", 00:13:43.011 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:43.011 "is_configured": true, 00:13:43.011 "data_offset": 2048, 00:13:43.011 "data_size": 63488 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "name": "BaseBdev2", 00:13:43.011 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:43.011 "is_configured": true, 00:13:43.011 "data_offset": 2048, 00:13:43.011 "data_size": 63488 00:13:43.011 }, 00:13:43.011 { 00:13:43.011 "name": "BaseBdev3", 00:13:43.011 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:43.011 "is_configured": true, 00:13:43.011 "data_offset": 2048, 00:13:43.011 "data_size": 63488 00:13:43.011 } 00:13:43.011 ] 00:13:43.011 }' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.011 [2024-12-07 01:57:48.435510] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.011 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.270 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.270 "name": "raid_bdev1", 00:13:43.270 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:43.270 "strip_size_kb": 64, 00:13:43.270 "state": "online", 00:13:43.270 "raid_level": "raid5f", 00:13:43.270 "superblock": true, 00:13:43.270 "num_base_bdevs": 3, 00:13:43.270 "num_base_bdevs_discovered": 2, 00:13:43.270 "num_base_bdevs_operational": 2, 00:13:43.270 "base_bdevs_list": [ 00:13:43.270 { 00:13:43.270 "name": null, 00:13:43.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.270 "is_configured": false, 00:13:43.270 "data_offset": 0, 00:13:43.270 "data_size": 63488 00:13:43.270 }, 00:13:43.270 { 00:13:43.270 "name": "BaseBdev2", 
00:13:43.270 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:43.270 "is_configured": true, 00:13:43.270 "data_offset": 2048, 00:13:43.270 "data_size": 63488 00:13:43.270 }, 00:13:43.270 { 00:13:43.270 "name": "BaseBdev3", 00:13:43.270 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:43.270 "is_configured": true, 00:13:43.270 "data_offset": 2048, 00:13:43.270 "data_size": 63488 00:13:43.270 } 00:13:43.270 ] 00:13:43.270 }' 00:13:43.270 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.270 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.530 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:43.530 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.530 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.530 [2024-12-07 01:57:48.862795] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.530 [2024-12-07 01:57:48.862995] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:43.530 [2024-12-07 01:57:48.863062] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:43.530 [2024-12-07 01:57:48.863136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:43.530 [2024-12-07 01:57:48.866762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:13:43.530 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.530 01:57:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:43.530 [2024-12-07 01:57:48.868888] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.466 "name": "raid_bdev1", 00:13:44.466 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:44.466 "strip_size_kb": 64, 00:13:44.466 "state": "online", 00:13:44.466 
"raid_level": "raid5f", 00:13:44.466 "superblock": true, 00:13:44.466 "num_base_bdevs": 3, 00:13:44.466 "num_base_bdevs_discovered": 3, 00:13:44.466 "num_base_bdevs_operational": 3, 00:13:44.466 "process": { 00:13:44.466 "type": "rebuild", 00:13:44.466 "target": "spare", 00:13:44.466 "progress": { 00:13:44.466 "blocks": 20480, 00:13:44.466 "percent": 16 00:13:44.466 } 00:13:44.466 }, 00:13:44.466 "base_bdevs_list": [ 00:13:44.466 { 00:13:44.466 "name": "spare", 00:13:44.466 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:44.466 "is_configured": true, 00:13:44.466 "data_offset": 2048, 00:13:44.466 "data_size": 63488 00:13:44.466 }, 00:13:44.466 { 00:13:44.466 "name": "BaseBdev2", 00:13:44.466 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:44.466 "is_configured": true, 00:13:44.466 "data_offset": 2048, 00:13:44.466 "data_size": 63488 00:13:44.466 }, 00:13:44.466 { 00:13:44.466 "name": "BaseBdev3", 00:13:44.466 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:44.466 "is_configured": true, 00:13:44.466 "data_offset": 2048, 00:13:44.466 "data_size": 63488 00:13:44.466 } 00:13:44.466 ] 00:13:44.466 }' 00:13:44.466 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.725 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.725 01:57:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.725 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.725 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.726 [2024-12-07 01:57:50.017622] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.726 [2024-12-07 01:57:50.075423] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:44.726 [2024-12-07 01:57:50.075473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:44.726 [2024-12-07 01:57:50.075501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:44.726 [2024-12-07 01:57:50.075509] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:44.726 "name": "raid_bdev1", 00:13:44.726 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:44.726 "strip_size_kb": 64, 00:13:44.726 "state": "online", 00:13:44.726 "raid_level": "raid5f", 00:13:44.726 "superblock": true, 00:13:44.726 "num_base_bdevs": 3, 00:13:44.726 "num_base_bdevs_discovered": 2, 00:13:44.726 "num_base_bdevs_operational": 2, 00:13:44.726 "base_bdevs_list": [ 00:13:44.726 { 00:13:44.726 "name": null, 00:13:44.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.726 "is_configured": false, 00:13:44.726 "data_offset": 0, 00:13:44.726 "data_size": 63488 00:13:44.726 }, 00:13:44.726 { 00:13:44.726 "name": "BaseBdev2", 00:13:44.726 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:44.726 "is_configured": true, 00:13:44.726 "data_offset": 2048, 00:13:44.726 "data_size": 63488 00:13:44.726 }, 00:13:44.726 { 00:13:44.726 "name": "BaseBdev3", 00:13:44.726 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:44.726 "is_configured": true, 00:13:44.726 "data_offset": 2048, 00:13:44.726 "data_size": 63488 00:13:44.726 } 00:13:44.726 ] 00:13:44.726 }' 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:44.726 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.292 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.292 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.292 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.292 [2024-12-07 01:57:50.531648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.292 [2024-12-07 01:57:50.531756] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.292 [2024-12-07 01:57:50.531807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:45.292 [2024-12-07 01:57:50.531839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.293 [2024-12-07 01:57:50.532290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.293 [2024-12-07 01:57:50.532349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.293 [2024-12-07 01:57:50.532463] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:45.293 [2024-12-07 01:57:50.532503] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:45.293 [2024-12-07 01:57:50.532544] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:45.293 [2024-12-07 01:57:50.532603] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:45.293 [2024-12-07 01:57:50.535973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:13:45.293 spare 00:13:45.293 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.293 01:57:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:45.293 [2024-12-07 01:57:50.538102] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:46.228 "name": "raid_bdev1", 00:13:46.228 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:46.228 "strip_size_kb": 64, 00:13:46.228 "state": 
"online", 00:13:46.228 "raid_level": "raid5f", 00:13:46.228 "superblock": true, 00:13:46.228 "num_base_bdevs": 3, 00:13:46.228 "num_base_bdevs_discovered": 3, 00:13:46.228 "num_base_bdevs_operational": 3, 00:13:46.228 "process": { 00:13:46.228 "type": "rebuild", 00:13:46.228 "target": "spare", 00:13:46.228 "progress": { 00:13:46.228 "blocks": 20480, 00:13:46.228 "percent": 16 00:13:46.228 } 00:13:46.228 }, 00:13:46.228 "base_bdevs_list": [ 00:13:46.228 { 00:13:46.228 "name": "spare", 00:13:46.228 "uuid": "6b0da6df-aa87-5388-a437-c83de22634e7", 00:13:46.228 "is_configured": true, 00:13:46.228 "data_offset": 2048, 00:13:46.228 "data_size": 63488 00:13:46.228 }, 00:13:46.228 { 00:13:46.228 "name": "BaseBdev2", 00:13:46.228 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:46.228 "is_configured": true, 00:13:46.228 "data_offset": 2048, 00:13:46.228 "data_size": 63488 00:13:46.228 }, 00:13:46.228 { 00:13:46.228 "name": "BaseBdev3", 00:13:46.228 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:46.228 "is_configured": true, 00:13:46.228 "data_offset": 2048, 00:13:46.228 "data_size": 63488 00:13:46.228 } 00:13:46.228 ] 00:13:46.228 }' 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:46.228 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.486 [2024-12-07 01:57:51.695244] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.486 [2024-12-07 01:57:51.744706] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:46.486 [2024-12-07 01:57:51.744823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.486 [2024-12-07 01:57:51.744841] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:46.486 [2024-12-07 01:57:51.744853] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.486 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.486 "name": "raid_bdev1", 00:13:46.486 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:46.486 "strip_size_kb": 64, 00:13:46.486 "state": "online", 00:13:46.486 "raid_level": "raid5f", 00:13:46.486 "superblock": true, 00:13:46.486 "num_base_bdevs": 3, 00:13:46.486 "num_base_bdevs_discovered": 2, 00:13:46.486 "num_base_bdevs_operational": 2, 00:13:46.486 "base_bdevs_list": [ 00:13:46.486 { 00:13:46.486 "name": null, 00:13:46.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.486 "is_configured": false, 00:13:46.486 "data_offset": 0, 00:13:46.486 "data_size": 63488 00:13:46.486 }, 00:13:46.486 { 00:13:46.486 "name": "BaseBdev2", 00:13:46.486 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:46.486 "is_configured": true, 00:13:46.487 "data_offset": 2048, 00:13:46.487 "data_size": 63488 00:13:46.487 }, 00:13:46.487 { 00:13:46.487 "name": "BaseBdev3", 00:13:46.487 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:46.487 "is_configured": true, 00:13:46.487 "data_offset": 2048, 00:13:46.487 "data_size": 63488 00:13:46.487 } 00:13:46.487 ] 00:13:46.487 }' 00:13:46.487 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.487 01:57:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.053 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.054 "name": "raid_bdev1", 00:13:47.054 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:47.054 "strip_size_kb": 64, 00:13:47.054 "state": "online", 00:13:47.054 "raid_level": "raid5f", 00:13:47.054 "superblock": true, 00:13:47.054 "num_base_bdevs": 3, 00:13:47.054 "num_base_bdevs_discovered": 2, 00:13:47.054 "num_base_bdevs_operational": 2, 00:13:47.054 "base_bdevs_list": [ 00:13:47.054 { 00:13:47.054 "name": null, 00:13:47.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.054 "is_configured": false, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 63488 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev2", 00:13:47.054 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 2048, 00:13:47.054 "data_size": 63488 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev3", 00:13:47.054 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:47.054 "is_configured": true, 
00:13:47.054 "data_offset": 2048, 00:13:47.054 "data_size": 63488 00:13:47.054 } 00:13:47.054 ] 00:13:47.054 }' 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.054 [2024-12-07 01:57:52.376587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.054 [2024-12-07 01:57:52.376689] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.054 [2024-12-07 01:57:52.376719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:47.054 [2024-12-07 01:57:52.376733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.054 [2024-12-07 01:57:52.377147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.054 [2024-12-07 
01:57:52.377165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.054 [2024-12-07 01:57:52.377231] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:47.054 [2024-12-07 01:57:52.377246] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:47.054 [2024-12-07 01:57:52.377253] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.054 [2024-12-07 01:57:52.377265] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:47.054 BaseBdev1 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.054 01:57:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.991 01:57:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.991 "name": "raid_bdev1", 00:13:47.991 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:47.991 "strip_size_kb": 64, 00:13:47.991 "state": "online", 00:13:47.991 "raid_level": "raid5f", 00:13:47.991 "superblock": true, 00:13:47.991 "num_base_bdevs": 3, 00:13:47.991 "num_base_bdevs_discovered": 2, 00:13:47.991 "num_base_bdevs_operational": 2, 00:13:47.991 "base_bdevs_list": [ 00:13:47.991 { 00:13:47.991 "name": null, 00:13:47.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.991 "is_configured": false, 00:13:47.991 "data_offset": 0, 00:13:47.991 "data_size": 63488 00:13:47.991 }, 00:13:47.991 { 00:13:47.991 "name": "BaseBdev2", 00:13:47.991 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:47.991 "is_configured": true, 00:13:47.991 "data_offset": 2048, 00:13:47.991 "data_size": 63488 00:13:47.991 }, 00:13:47.991 { 00:13:47.991 "name": "BaseBdev3", 00:13:47.991 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:47.991 "is_configured": true, 00:13:47.991 "data_offset": 2048, 00:13:47.991 "data_size": 63488 00:13:47.991 } 00:13:47.991 ] 00:13:47.991 }' 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.991 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.561 "name": "raid_bdev1", 00:13:48.561 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:48.561 "strip_size_kb": 64, 00:13:48.561 "state": "online", 00:13:48.561 "raid_level": "raid5f", 00:13:48.561 "superblock": true, 00:13:48.561 "num_base_bdevs": 3, 00:13:48.561 "num_base_bdevs_discovered": 2, 00:13:48.561 "num_base_bdevs_operational": 2, 00:13:48.561 "base_bdevs_list": [ 00:13:48.561 { 00:13:48.561 "name": null, 00:13:48.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.561 "is_configured": false, 00:13:48.561 "data_offset": 0, 00:13:48.561 "data_size": 63488 00:13:48.561 }, 00:13:48.561 { 00:13:48.561 "name": "BaseBdev2", 00:13:48.561 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 
00:13:48.561 "is_configured": true, 00:13:48.561 "data_offset": 2048, 00:13:48.561 "data_size": 63488 00:13:48.561 }, 00:13:48.561 { 00:13:48.561 "name": "BaseBdev3", 00:13:48.561 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:48.561 "is_configured": true, 00:13:48.561 "data_offset": 2048, 00:13:48.561 "data_size": 63488 00:13:48.561 } 00:13:48.561 ] 00:13:48.561 }' 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.561 01:57:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.561 [2024-12-07 01:57:53.965930] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.561 [2024-12-07 01:57:53.966071] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:48.561 [2024-12-07 01:57:53.966086] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:48.561 request: 00:13:48.561 { 00:13:48.561 "base_bdev": "BaseBdev1", 00:13:48.561 "raid_bdev": "raid_bdev1", 00:13:48.561 "method": "bdev_raid_add_base_bdev", 00:13:48.561 "req_id": 1 00:13:48.561 } 00:13:48.561 Got JSON-RPC error response 00:13:48.561 response: 00:13:48.561 { 00:13:48.561 "code": -22, 00:13:48.561 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:48.561 } 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:48.561 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:48.562 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.562 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.562 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.562 01:57:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.936 01:57:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.936 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.936 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.936 "name": "raid_bdev1", 00:13:49.936 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:49.936 "strip_size_kb": 64, 00:13:49.936 "state": "online", 00:13:49.936 "raid_level": "raid5f", 00:13:49.936 "superblock": true, 00:13:49.936 "num_base_bdevs": 3, 00:13:49.936 "num_base_bdevs_discovered": 2, 00:13:49.936 "num_base_bdevs_operational": 2, 00:13:49.936 "base_bdevs_list": [ 00:13:49.936 { 00:13:49.936 "name": null, 00:13:49.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.936 "is_configured": false, 00:13:49.936 "data_offset": 0, 00:13:49.936 "data_size": 63488 00:13:49.936 }, 00:13:49.936 { 00:13:49.936 
"name": "BaseBdev2", 00:13:49.936 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:49.936 "is_configured": true, 00:13:49.936 "data_offset": 2048, 00:13:49.936 "data_size": 63488 00:13:49.936 }, 00:13:49.936 { 00:13:49.936 "name": "BaseBdev3", 00:13:49.936 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:49.936 "is_configured": true, 00:13:49.936 "data_offset": 2048, 00:13:49.936 "data_size": 63488 00:13:49.936 } 00:13:49.936 ] 00:13:49.936 }' 00:13:49.936 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.936 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.195 "name": "raid_bdev1", 00:13:50.195 "uuid": "9ab6e950-b0b7-410f-ae82-faa497a756c8", 00:13:50.195 
"strip_size_kb": 64, 00:13:50.195 "state": "online", 00:13:50.195 "raid_level": "raid5f", 00:13:50.195 "superblock": true, 00:13:50.195 "num_base_bdevs": 3, 00:13:50.195 "num_base_bdevs_discovered": 2, 00:13:50.195 "num_base_bdevs_operational": 2, 00:13:50.195 "base_bdevs_list": [ 00:13:50.195 { 00:13:50.195 "name": null, 00:13:50.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.195 "is_configured": false, 00:13:50.195 "data_offset": 0, 00:13:50.195 "data_size": 63488 00:13:50.195 }, 00:13:50.195 { 00:13:50.195 "name": "BaseBdev2", 00:13:50.195 "uuid": "01808710-3600-5193-99d2-ad8abce9d3e7", 00:13:50.195 "is_configured": true, 00:13:50.195 "data_offset": 2048, 00:13:50.195 "data_size": 63488 00:13:50.195 }, 00:13:50.195 { 00:13:50.195 "name": "BaseBdev3", 00:13:50.195 "uuid": "e85ff81c-9a65-5ffe-90dc-cefb48cde7f1", 00:13:50.195 "is_configured": true, 00:13:50.195 "data_offset": 2048, 00:13:50.195 "data_size": 63488 00:13:50.195 } 00:13:50.195 ] 00:13:50.195 }' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92211 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 92211 ']' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 92211 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:50.195 01:57:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92211 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:50.195 killing process with pid 92211 00:13:50.195 Received shutdown signal, test time was about 60.000000 seconds 00:13:50.195 00:13:50.195 Latency(us) 00:13:50.195 [2024-12-07T01:57:55.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.195 [2024-12-07T01:57:55.657Z] =================================================================================================================== 00:13:50.195 [2024-12-07T01:57:55.657Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92211' 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 92211 00:13:50.195 [2024-12-07 01:57:55.589275] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.195 [2024-12-07 01:57:55.589387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.195 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 92211 00:13:50.195 [2024-12-07 01:57:55.589453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.195 [2024-12-07 01:57:55.589462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:50.195 [2024-12-07 01:57:55.628902] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.455 ************************************ 00:13:50.455 END TEST raid5f_rebuild_test_sb 00:13:50.455 ************************************ 00:13:50.455 01:57:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:50.455 00:13:50.455 real 0m21.357s 00:13:50.455 user 0m27.755s 00:13:50.455 sys 0m2.665s 00:13:50.455 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:50.455 01:57:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.455 01:57:55 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:50.455 01:57:55 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:50.455 01:57:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:50.455 01:57:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:50.455 01:57:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.714 ************************************ 00:13:50.714 START TEST raid5f_state_function_test 00:13:50.714 ************************************ 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92941 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92941' 00:13:50.714 Process raid pid: 92941 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92941 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 92941 ']' 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:50.714 01:57:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.714 [2024-12-07 01:57:56.016614] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:50.715 [2024-12-07 01:57:56.016805] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.715 [2024-12-07 01:57:56.163045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.973 [2024-12-07 01:57:56.207447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.973 [2024-12-07 01:57:56.248089] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.973 [2024-12-07 01:57:56.248206] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.541 [2024-12-07 01:57:56.840862] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:51.541 [2024-12-07 01:57:56.840913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:51.541 [2024-12-07 01:57:56.840933] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:51.541 [2024-12-07 01:57:56.840943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:51.541 [2024-12-07 01:57:56.840949] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:51.541 [2024-12-07 01:57:56.840959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:51.541 [2024-12-07 01:57:56.840965] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:51.541 [2024-12-07 01:57:56.840973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.541 "name": "Existed_Raid", 00:13:51.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.541 "strip_size_kb": 64, 00:13:51.541 "state": "configuring", 00:13:51.541 "raid_level": "raid5f", 00:13:51.541 "superblock": false, 00:13:51.541 "num_base_bdevs": 4, 00:13:51.541 "num_base_bdevs_discovered": 0, 00:13:51.541 "num_base_bdevs_operational": 4, 00:13:51.541 "base_bdevs_list": [ 00:13:51.541 { 00:13:51.541 "name": "BaseBdev1", 00:13:51.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.541 "is_configured": false, 00:13:51.541 "data_offset": 0, 00:13:51.541 "data_size": 0 00:13:51.541 }, 00:13:51.541 { 00:13:51.541 "name": "BaseBdev2", 00:13:51.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.541 "is_configured": false, 00:13:51.541 "data_offset": 0, 00:13:51.541 "data_size": 0 00:13:51.541 }, 00:13:51.541 { 00:13:51.541 "name": "BaseBdev3", 00:13:51.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.541 "is_configured": false, 00:13:51.541 "data_offset": 0, 00:13:51.541 "data_size": 0 00:13:51.541 }, 00:13:51.541 { 00:13:51.541 "name": "BaseBdev4", 00:13:51.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.541 "is_configured": false, 00:13:51.541 "data_offset": 0, 00:13:51.541 "data_size": 0 00:13:51.541 } 00:13:51.541 ] 00:13:51.541 }' 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.541 01:57:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 [2024-12-07 01:57:57.319932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.108 [2024-12-07 01:57:57.319979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 [2024-12-07 01:57:57.331931] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.108 [2024-12-07 01:57:57.331970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.108 [2024-12-07 01:57:57.331978] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.108 [2024-12-07 01:57:57.331986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.108 [2024-12-07 01:57:57.331992] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.108 [2024-12-07 01:57:57.332000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.108 [2024-12-07 01:57:57.332006] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:13:52.108 [2024-12-07 01:57:57.332014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 [2024-12-07 01:57:57.352372] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.108 BaseBdev1 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.108 
01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 [ 00:13:52.108 { 00:13:52.108 "name": "BaseBdev1", 00:13:52.108 "aliases": [ 00:13:52.108 "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b" 00:13:52.108 ], 00:13:52.108 "product_name": "Malloc disk", 00:13:52.108 "block_size": 512, 00:13:52.108 "num_blocks": 65536, 00:13:52.108 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:52.108 "assigned_rate_limits": { 00:13:52.108 "rw_ios_per_sec": 0, 00:13:52.108 "rw_mbytes_per_sec": 0, 00:13:52.108 "r_mbytes_per_sec": 0, 00:13:52.108 "w_mbytes_per_sec": 0 00:13:52.108 }, 00:13:52.108 "claimed": true, 00:13:52.108 "claim_type": "exclusive_write", 00:13:52.108 "zoned": false, 00:13:52.108 "supported_io_types": { 00:13:52.108 "read": true, 00:13:52.108 "write": true, 00:13:52.108 "unmap": true, 00:13:52.108 "flush": true, 00:13:52.108 "reset": true, 00:13:52.108 "nvme_admin": false, 00:13:52.108 "nvme_io": false, 00:13:52.108 "nvme_io_md": false, 00:13:52.108 "write_zeroes": true, 00:13:52.108 "zcopy": true, 00:13:52.108 "get_zone_info": false, 00:13:52.108 "zone_management": false, 00:13:52.108 "zone_append": false, 00:13:52.108 "compare": false, 00:13:52.108 "compare_and_write": false, 00:13:52.108 "abort": true, 00:13:52.108 "seek_hole": false, 00:13:52.108 "seek_data": false, 00:13:52.108 "copy": true, 00:13:52.108 "nvme_iov_md": false 00:13:52.108 }, 00:13:52.108 "memory_domains": [ 00:13:52.108 { 00:13:52.108 "dma_device_id": "system", 00:13:52.108 "dma_device_type": 1 00:13:52.108 }, 00:13:52.108 { 00:13:52.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.108 "dma_device_type": 2 00:13:52.108 } 00:13:52.108 ], 00:13:52.108 "driver_specific": {} 00:13:52.108 } 
00:13:52.108 ] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.108 "name": "Existed_Raid", 00:13:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.108 "strip_size_kb": 64, 00:13:52.108 "state": "configuring", 00:13:52.108 "raid_level": "raid5f", 00:13:52.108 "superblock": false, 00:13:52.108 "num_base_bdevs": 4, 00:13:52.108 "num_base_bdevs_discovered": 1, 00:13:52.108 "num_base_bdevs_operational": 4, 00:13:52.108 "base_bdevs_list": [ 00:13:52.108 { 00:13:52.108 "name": "BaseBdev1", 00:13:52.108 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:52.108 "is_configured": true, 00:13:52.108 "data_offset": 0, 00:13:52.108 "data_size": 65536 00:13:52.108 }, 00:13:52.108 { 00:13:52.108 "name": "BaseBdev2", 00:13:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.108 "is_configured": false, 00:13:52.108 "data_offset": 0, 00:13:52.108 "data_size": 0 00:13:52.108 }, 00:13:52.108 { 00:13:52.108 "name": "BaseBdev3", 00:13:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.108 "is_configured": false, 00:13:52.108 "data_offset": 0, 00:13:52.108 "data_size": 0 00:13:52.108 }, 00:13:52.108 { 00:13:52.108 "name": "BaseBdev4", 00:13:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.108 "is_configured": false, 00:13:52.108 "data_offset": 0, 00:13:52.108 "data_size": 0 00:13:52.108 } 00:13:52.108 ] 00:13:52.108 }' 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.108 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.367 
[2024-12-07 01:57:57.799625] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.367 [2024-12-07 01:57:57.799682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.367 [2024-12-07 01:57:57.811658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.367 [2024-12-07 01:57:57.813484] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.367 [2024-12-07 01:57:57.813523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.367 [2024-12-07 01:57:57.813531] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:52.367 [2024-12-07 01:57:57.813539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:52.367 [2024-12-07 01:57:57.813545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:52.367 [2024-12-07 01:57:57.813553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.367 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.368 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.626 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.626 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.626 "name": "Existed_Raid", 00:13:52.626 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:52.626 "strip_size_kb": 64, 00:13:52.626 "state": "configuring", 00:13:52.626 "raid_level": "raid5f", 00:13:52.626 "superblock": false, 00:13:52.626 "num_base_bdevs": 4, 00:13:52.626 "num_base_bdevs_discovered": 1, 00:13:52.626 "num_base_bdevs_operational": 4, 00:13:52.626 "base_bdevs_list": [ 00:13:52.626 { 00:13:52.627 "name": "BaseBdev1", 00:13:52.627 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:52.627 "is_configured": true, 00:13:52.627 "data_offset": 0, 00:13:52.627 "data_size": 65536 00:13:52.627 }, 00:13:52.627 { 00:13:52.627 "name": "BaseBdev2", 00:13:52.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.627 "is_configured": false, 00:13:52.627 "data_offset": 0, 00:13:52.627 "data_size": 0 00:13:52.627 }, 00:13:52.627 { 00:13:52.627 "name": "BaseBdev3", 00:13:52.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.627 "is_configured": false, 00:13:52.627 "data_offset": 0, 00:13:52.627 "data_size": 0 00:13:52.627 }, 00:13:52.627 { 00:13:52.627 "name": "BaseBdev4", 00:13:52.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.627 "is_configured": false, 00:13:52.627 "data_offset": 0, 00:13:52.627 "data_size": 0 00:13:52.627 } 00:13:52.627 ] 00:13:52.627 }' 00:13:52.627 01:57:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.627 01:57:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 [2024-12-07 01:57:58.270835] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.885 BaseBdev2 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.885 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 [ 00:13:52.885 { 00:13:52.885 "name": "BaseBdev2", 00:13:52.885 "aliases": [ 00:13:52.885 "34dcd809-1435-40a0-9c03-8b6ac8a450d5" 00:13:52.885 ], 00:13:52.885 "product_name": "Malloc disk", 00:13:52.885 "block_size": 512, 00:13:52.885 "num_blocks": 65536, 00:13:52.885 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:52.885 "assigned_rate_limits": { 00:13:52.885 "rw_ios_per_sec": 0, 00:13:52.885 "rw_mbytes_per_sec": 0, 00:13:52.885 
"r_mbytes_per_sec": 0, 00:13:52.885 "w_mbytes_per_sec": 0 00:13:52.885 }, 00:13:52.885 "claimed": true, 00:13:52.885 "claim_type": "exclusive_write", 00:13:52.885 "zoned": false, 00:13:52.885 "supported_io_types": { 00:13:52.885 "read": true, 00:13:52.885 "write": true, 00:13:52.885 "unmap": true, 00:13:52.885 "flush": true, 00:13:52.885 "reset": true, 00:13:52.885 "nvme_admin": false, 00:13:52.885 "nvme_io": false, 00:13:52.885 "nvme_io_md": false, 00:13:52.885 "write_zeroes": true, 00:13:52.885 "zcopy": true, 00:13:52.885 "get_zone_info": false, 00:13:52.885 "zone_management": false, 00:13:52.885 "zone_append": false, 00:13:52.885 "compare": false, 00:13:52.885 "compare_and_write": false, 00:13:52.885 "abort": true, 00:13:52.885 "seek_hole": false, 00:13:52.885 "seek_data": false, 00:13:52.885 "copy": true, 00:13:52.885 "nvme_iov_md": false 00:13:52.885 }, 00:13:52.885 "memory_domains": [ 00:13:52.886 { 00:13:52.886 "dma_device_id": "system", 00:13:52.886 "dma_device_type": 1 00:13:52.886 }, 00:13:52.886 { 00:13:52.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.886 "dma_device_type": 2 00:13:52.886 } 00:13:52.886 ], 00:13:52.886 "driver_specific": {} 00:13:52.886 } 00:13:52.886 ] 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.886 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.144 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.144 "name": "Existed_Raid", 00:13:53.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.144 "strip_size_kb": 64, 00:13:53.144 "state": "configuring", 00:13:53.144 "raid_level": "raid5f", 00:13:53.144 "superblock": false, 00:13:53.144 "num_base_bdevs": 4, 00:13:53.144 "num_base_bdevs_discovered": 2, 00:13:53.144 "num_base_bdevs_operational": 4, 00:13:53.144 "base_bdevs_list": [ 00:13:53.144 { 00:13:53.144 "name": "BaseBdev1", 00:13:53.144 "uuid": 
"23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:53.144 "is_configured": true, 00:13:53.144 "data_offset": 0, 00:13:53.144 "data_size": 65536 00:13:53.144 }, 00:13:53.144 { 00:13:53.144 "name": "BaseBdev2", 00:13:53.144 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:53.144 "is_configured": true, 00:13:53.144 "data_offset": 0, 00:13:53.144 "data_size": 65536 00:13:53.144 }, 00:13:53.144 { 00:13:53.144 "name": "BaseBdev3", 00:13:53.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.144 "is_configured": false, 00:13:53.144 "data_offset": 0, 00:13:53.144 "data_size": 0 00:13:53.144 }, 00:13:53.144 { 00:13:53.144 "name": "BaseBdev4", 00:13:53.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.144 "is_configured": false, 00:13:53.144 "data_offset": 0, 00:13:53.144 "data_size": 0 00:13:53.144 } 00:13:53.144 ] 00:13:53.144 }' 00:13:53.144 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.144 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 [2024-12-07 01:57:58.744832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:53.404 BaseBdev3 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 [ 00:13:53.404 { 00:13:53.404 "name": "BaseBdev3", 00:13:53.404 "aliases": [ 00:13:53.404 "e0fa535c-f94e-4af5-97ff-a44ade7756e2" 00:13:53.404 ], 00:13:53.404 "product_name": "Malloc disk", 00:13:53.404 "block_size": 512, 00:13:53.404 "num_blocks": 65536, 00:13:53.404 "uuid": "e0fa535c-f94e-4af5-97ff-a44ade7756e2", 00:13:53.404 "assigned_rate_limits": { 00:13:53.404 "rw_ios_per_sec": 0, 00:13:53.404 "rw_mbytes_per_sec": 0, 00:13:53.404 "r_mbytes_per_sec": 0, 00:13:53.404 "w_mbytes_per_sec": 0 00:13:53.404 }, 00:13:53.404 "claimed": true, 00:13:53.404 "claim_type": "exclusive_write", 00:13:53.404 "zoned": false, 00:13:53.404 "supported_io_types": { 00:13:53.404 "read": true, 00:13:53.404 "write": true, 00:13:53.404 "unmap": true, 00:13:53.404 "flush": true, 00:13:53.404 "reset": true, 00:13:53.404 "nvme_admin": false, 
00:13:53.404 "nvme_io": false, 00:13:53.404 "nvme_io_md": false, 00:13:53.404 "write_zeroes": true, 00:13:53.404 "zcopy": true, 00:13:53.404 "get_zone_info": false, 00:13:53.404 "zone_management": false, 00:13:53.404 "zone_append": false, 00:13:53.404 "compare": false, 00:13:53.404 "compare_and_write": false, 00:13:53.404 "abort": true, 00:13:53.404 "seek_hole": false, 00:13:53.404 "seek_data": false, 00:13:53.404 "copy": true, 00:13:53.404 "nvme_iov_md": false 00:13:53.404 }, 00:13:53.404 "memory_domains": [ 00:13:53.404 { 00:13:53.404 "dma_device_id": "system", 00:13:53.404 "dma_device_type": 1 00:13:53.404 }, 00:13:53.404 { 00:13:53.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.404 "dma_device_type": 2 00:13:53.404 } 00:13:53.404 ], 00:13:53.404 "driver_specific": {} 00:13:53.404 } 00:13:53.404 ] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.404 "name": "Existed_Raid", 00:13:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.404 "strip_size_kb": 64, 00:13:53.404 "state": "configuring", 00:13:53.404 "raid_level": "raid5f", 00:13:53.404 "superblock": false, 00:13:53.404 "num_base_bdevs": 4, 00:13:53.404 "num_base_bdevs_discovered": 3, 00:13:53.404 "num_base_bdevs_operational": 4, 00:13:53.404 "base_bdevs_list": [ 00:13:53.404 { 00:13:53.404 "name": "BaseBdev1", 00:13:53.404 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:53.404 "is_configured": true, 00:13:53.404 "data_offset": 0, 00:13:53.404 "data_size": 65536 00:13:53.404 }, 00:13:53.404 { 00:13:53.404 "name": "BaseBdev2", 00:13:53.404 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:53.404 "is_configured": true, 00:13:53.404 "data_offset": 0, 00:13:53.404 "data_size": 65536 00:13:53.404 }, 00:13:53.404 { 
00:13:53.404 "name": "BaseBdev3", 00:13:53.404 "uuid": "e0fa535c-f94e-4af5-97ff-a44ade7756e2", 00:13:53.404 "is_configured": true, 00:13:53.404 "data_offset": 0, 00:13:53.404 "data_size": 65536 00:13:53.404 }, 00:13:53.404 { 00:13:53.404 "name": "BaseBdev4", 00:13:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.404 "is_configured": false, 00:13:53.404 "data_offset": 0, 00:13:53.404 "data_size": 0 00:13:53.404 } 00:13:53.404 ] 00:13:53.404 }' 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.404 01:57:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.972 [2024-12-07 01:57:59.198857] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:53.972 [2024-12-07 01:57:59.198915] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:53.972 [2024-12-07 01:57:59.198922] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:53.972 [2024-12-07 01:57:59.199204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:53.972 [2024-12-07 01:57:59.199680] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:53.972 [2024-12-07 01:57:59.199711] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:53.972 [2024-12-07 01:57:59.199919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.972 BaseBdev4 00:13:53.972 01:57:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.972 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.973 [ 00:13:53.973 { 00:13:53.973 "name": "BaseBdev4", 00:13:53.973 "aliases": [ 00:13:53.973 "f56be88f-90bb-4dc7-ad17-65532d0725de" 00:13:53.973 ], 00:13:53.973 "product_name": "Malloc disk", 00:13:53.973 "block_size": 512, 00:13:53.973 "num_blocks": 65536, 00:13:53.973 "uuid": "f56be88f-90bb-4dc7-ad17-65532d0725de", 00:13:53.973 "assigned_rate_limits": { 00:13:53.973 "rw_ios_per_sec": 0, 00:13:53.973 
"rw_mbytes_per_sec": 0, 00:13:53.973 "r_mbytes_per_sec": 0, 00:13:53.973 "w_mbytes_per_sec": 0 00:13:53.973 }, 00:13:53.973 "claimed": true, 00:13:53.973 "claim_type": "exclusive_write", 00:13:53.973 "zoned": false, 00:13:53.973 "supported_io_types": { 00:13:53.973 "read": true, 00:13:53.973 "write": true, 00:13:53.973 "unmap": true, 00:13:53.973 "flush": true, 00:13:53.973 "reset": true, 00:13:53.973 "nvme_admin": false, 00:13:53.973 "nvme_io": false, 00:13:53.973 "nvme_io_md": false, 00:13:53.973 "write_zeroes": true, 00:13:53.973 "zcopy": true, 00:13:53.973 "get_zone_info": false, 00:13:53.973 "zone_management": false, 00:13:53.973 "zone_append": false, 00:13:53.973 "compare": false, 00:13:53.973 "compare_and_write": false, 00:13:53.973 "abort": true, 00:13:53.973 "seek_hole": false, 00:13:53.973 "seek_data": false, 00:13:53.973 "copy": true, 00:13:53.973 "nvme_iov_md": false 00:13:53.973 }, 00:13:53.973 "memory_domains": [ 00:13:53.973 { 00:13:53.973 "dma_device_id": "system", 00:13:53.973 "dma_device_type": 1 00:13:53.973 }, 00:13:53.973 { 00:13:53.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:53.973 "dma_device_type": 2 00:13:53.973 } 00:13:53.973 ], 00:13:53.973 "driver_specific": {} 00:13:53.973 } 00:13:53.973 ] 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.973 01:57:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.973 "name": "Existed_Raid", 00:13:53.973 "uuid": "4007974a-541e-454c-86de-64c0903c2ebe", 00:13:53.973 "strip_size_kb": 64, 00:13:53.973 "state": "online", 00:13:53.973 "raid_level": "raid5f", 00:13:53.973 "superblock": false, 00:13:53.973 "num_base_bdevs": 4, 00:13:53.973 "num_base_bdevs_discovered": 4, 00:13:53.973 "num_base_bdevs_operational": 4, 00:13:53.973 "base_bdevs_list": [ 00:13:53.973 { 00:13:53.973 "name": 
"BaseBdev1", 00:13:53.973 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:53.973 "is_configured": true, 00:13:53.973 "data_offset": 0, 00:13:53.973 "data_size": 65536 00:13:53.973 }, 00:13:53.973 { 00:13:53.973 "name": "BaseBdev2", 00:13:53.973 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:53.973 "is_configured": true, 00:13:53.973 "data_offset": 0, 00:13:53.973 "data_size": 65536 00:13:53.973 }, 00:13:53.973 { 00:13:53.973 "name": "BaseBdev3", 00:13:53.973 "uuid": "e0fa535c-f94e-4af5-97ff-a44ade7756e2", 00:13:53.973 "is_configured": true, 00:13:53.973 "data_offset": 0, 00:13:53.973 "data_size": 65536 00:13:53.973 }, 00:13:53.973 { 00:13:53.973 "name": "BaseBdev4", 00:13:53.973 "uuid": "f56be88f-90bb-4dc7-ad17-65532d0725de", 00:13:53.973 "is_configured": true, 00:13:53.973 "data_offset": 0, 00:13:53.973 "data_size": 65536 00:13:53.973 } 00:13:53.973 ] 00:13:53.973 }' 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.973 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:54.232 [2024-12-07 01:57:59.674241] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:54.232 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:54.491 "name": "Existed_Raid", 00:13:54.491 "aliases": [ 00:13:54.491 "4007974a-541e-454c-86de-64c0903c2ebe" 00:13:54.491 ], 00:13:54.491 "product_name": "Raid Volume", 00:13:54.491 "block_size": 512, 00:13:54.491 "num_blocks": 196608, 00:13:54.491 "uuid": "4007974a-541e-454c-86de-64c0903c2ebe", 00:13:54.491 "assigned_rate_limits": { 00:13:54.491 "rw_ios_per_sec": 0, 00:13:54.491 "rw_mbytes_per_sec": 0, 00:13:54.491 "r_mbytes_per_sec": 0, 00:13:54.491 "w_mbytes_per_sec": 0 00:13:54.491 }, 00:13:54.491 "claimed": false, 00:13:54.491 "zoned": false, 00:13:54.491 "supported_io_types": { 00:13:54.491 "read": true, 00:13:54.491 "write": true, 00:13:54.491 "unmap": false, 00:13:54.491 "flush": false, 00:13:54.491 "reset": true, 00:13:54.491 "nvme_admin": false, 00:13:54.491 "nvme_io": false, 00:13:54.491 "nvme_io_md": false, 00:13:54.491 "write_zeroes": true, 00:13:54.491 "zcopy": false, 00:13:54.491 "get_zone_info": false, 00:13:54.491 "zone_management": false, 00:13:54.491 "zone_append": false, 00:13:54.491 "compare": false, 00:13:54.491 "compare_and_write": false, 00:13:54.491 "abort": false, 00:13:54.491 "seek_hole": false, 00:13:54.491 "seek_data": false, 00:13:54.491 "copy": false, 00:13:54.491 "nvme_iov_md": false 00:13:54.491 }, 00:13:54.491 "driver_specific": { 00:13:54.491 "raid": { 00:13:54.491 "uuid": "4007974a-541e-454c-86de-64c0903c2ebe", 00:13:54.491 "strip_size_kb": 64, 
00:13:54.491 "state": "online", 00:13:54.491 "raid_level": "raid5f", 00:13:54.491 "superblock": false, 00:13:54.491 "num_base_bdevs": 4, 00:13:54.491 "num_base_bdevs_discovered": 4, 00:13:54.491 "num_base_bdevs_operational": 4, 00:13:54.491 "base_bdevs_list": [ 00:13:54.491 { 00:13:54.491 "name": "BaseBdev1", 00:13:54.491 "uuid": "23db3fe1-89d6-47bb-8d1c-5eb25e6c0c0b", 00:13:54.491 "is_configured": true, 00:13:54.491 "data_offset": 0, 00:13:54.491 "data_size": 65536 00:13:54.491 }, 00:13:54.491 { 00:13:54.491 "name": "BaseBdev2", 00:13:54.491 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:54.491 "is_configured": true, 00:13:54.491 "data_offset": 0, 00:13:54.491 "data_size": 65536 00:13:54.491 }, 00:13:54.491 { 00:13:54.491 "name": "BaseBdev3", 00:13:54.491 "uuid": "e0fa535c-f94e-4af5-97ff-a44ade7756e2", 00:13:54.491 "is_configured": true, 00:13:54.491 "data_offset": 0, 00:13:54.491 "data_size": 65536 00:13:54.491 }, 00:13:54.491 { 00:13:54.491 "name": "BaseBdev4", 00:13:54.491 "uuid": "f56be88f-90bb-4dc7-ad17-65532d0725de", 00:13:54.491 "is_configured": true, 00:13:54.491 "data_offset": 0, 00:13:54.491 "data_size": 65536 00:13:54.491 } 00:13:54.491 ] 00:13:54.491 } 00:13:54.491 } 00:13:54.491 }' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:54.491 BaseBdev2 00:13:54.491 BaseBdev3 00:13:54.491 BaseBdev4' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.491 01:57:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.491 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:54.750 [2024-12-07 01:57:59.977560] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.750 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.751 01:57:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.751 01:57:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.751 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.751 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.751 "name": "Existed_Raid", 00:13:54.751 "uuid": "4007974a-541e-454c-86de-64c0903c2ebe", 00:13:54.751 "strip_size_kb": 64, 00:13:54.751 "state": "online", 00:13:54.751 "raid_level": "raid5f", 00:13:54.751 "superblock": false, 00:13:54.751 "num_base_bdevs": 4, 00:13:54.751 "num_base_bdevs_discovered": 3, 00:13:54.751 "num_base_bdevs_operational": 3, 00:13:54.751 "base_bdevs_list": [ 00:13:54.751 { 00:13:54.751 "name": null, 00:13:54.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.751 "is_configured": false, 00:13:54.751 "data_offset": 0, 00:13:54.751 "data_size": 65536 00:13:54.751 }, 00:13:54.751 { 00:13:54.751 "name": "BaseBdev2", 00:13:54.751 "uuid": "34dcd809-1435-40a0-9c03-8b6ac8a450d5", 00:13:54.751 "is_configured": true, 00:13:54.751 "data_offset": 0, 00:13:54.751 "data_size": 65536 00:13:54.751 }, 00:13:54.751 { 00:13:54.751 "name": "BaseBdev3", 00:13:54.751 "uuid": "e0fa535c-f94e-4af5-97ff-a44ade7756e2", 00:13:54.751 "is_configured": true, 00:13:54.751 "data_offset": 0, 00:13:54.751 "data_size": 65536 00:13:54.751 }, 00:13:54.751 { 00:13:54.751 "name": "BaseBdev4", 00:13:54.751 "uuid": "f56be88f-90bb-4dc7-ad17-65532d0725de", 00:13:54.751 "is_configured": true, 00:13:54.751 "data_offset": 0, 00:13:54.751 "data_size": 65536 00:13:54.751 } 00:13:54.751 ] 00:13:54.751 }' 00:13:54.751 
01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.751 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.009 [2024-12-07 01:58:00.423940] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.009 [2024-12-07 01:58:00.424034] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.009 [2024-12-07 01:58:00.435061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:55.009 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 [2024-12-07 01:58:00.490987] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 [2024-12-07 01:58:00.561488] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:55.269 [2024-12-07 01:58:00.561533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 BaseBdev2 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 [ 00:13:55.269 { 00:13:55.269 "name": "BaseBdev2", 00:13:55.269 "aliases": [ 00:13:55.269 "f3acf447-7447-4374-b875-b85d2787a9df" 00:13:55.269 ], 00:13:55.269 "product_name": "Malloc disk", 00:13:55.269 "block_size": 512, 00:13:55.269 "num_blocks": 65536, 00:13:55.269 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:55.269 "assigned_rate_limits": { 00:13:55.269 "rw_ios_per_sec": 0, 00:13:55.269 "rw_mbytes_per_sec": 0, 00:13:55.269 "r_mbytes_per_sec": 0, 00:13:55.269 "w_mbytes_per_sec": 0 00:13:55.269 }, 00:13:55.269 "claimed": false, 00:13:55.269 "zoned": false, 00:13:55.269 "supported_io_types": { 00:13:55.269 "read": true, 00:13:55.269 "write": true, 00:13:55.269 "unmap": true, 00:13:55.269 "flush": true, 00:13:55.269 "reset": true, 00:13:55.269 "nvme_admin": false, 00:13:55.269 "nvme_io": false, 00:13:55.269 "nvme_io_md": false, 00:13:55.269 "write_zeroes": true, 00:13:55.269 "zcopy": true, 00:13:55.269 "get_zone_info": false, 00:13:55.269 "zone_management": false, 00:13:55.269 "zone_append": false, 00:13:55.269 "compare": false, 00:13:55.269 "compare_and_write": false, 00:13:55.269 "abort": true, 00:13:55.269 "seek_hole": false, 00:13:55.269 "seek_data": false, 00:13:55.269 "copy": true, 00:13:55.269 "nvme_iov_md": false 00:13:55.269 }, 00:13:55.269 "memory_domains": [ 00:13:55.269 { 00:13:55.269 "dma_device_id": "system", 00:13:55.269 
"dma_device_type": 1 00:13:55.269 }, 00:13:55.269 { 00:13:55.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.269 "dma_device_type": 2 00:13:55.269 } 00:13:55.269 ], 00:13:55.269 "driver_specific": {} 00:13:55.269 } 00:13:55.269 ] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.269 BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:55.269 01:58:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.269 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.270 [ 00:13:55.270 { 00:13:55.270 "name": "BaseBdev3", 00:13:55.270 "aliases": [ 00:13:55.270 "770d6267-3f37-44e2-8a74-738a22897c49" 00:13:55.270 ], 00:13:55.270 "product_name": "Malloc disk", 00:13:55.270 "block_size": 512, 00:13:55.270 "num_blocks": 65536, 00:13:55.270 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:55.270 "assigned_rate_limits": { 00:13:55.270 "rw_ios_per_sec": 0, 00:13:55.270 "rw_mbytes_per_sec": 0, 00:13:55.270 "r_mbytes_per_sec": 0, 00:13:55.270 "w_mbytes_per_sec": 0 00:13:55.270 }, 00:13:55.270 "claimed": false, 00:13:55.270 "zoned": false, 00:13:55.270 "supported_io_types": { 00:13:55.270 "read": true, 00:13:55.270 "write": true, 00:13:55.270 "unmap": true, 00:13:55.270 "flush": true, 00:13:55.270 "reset": true, 00:13:55.270 "nvme_admin": false, 00:13:55.270 "nvme_io": false, 00:13:55.270 "nvme_io_md": false, 00:13:55.270 "write_zeroes": true, 00:13:55.270 "zcopy": true, 00:13:55.270 "get_zone_info": false, 00:13:55.270 "zone_management": false, 00:13:55.270 "zone_append": false, 00:13:55.270 "compare": false, 00:13:55.270 "compare_and_write": false, 00:13:55.270 "abort": true, 00:13:55.270 "seek_hole": false, 00:13:55.270 "seek_data": false, 00:13:55.270 "copy": true, 00:13:55.270 "nvme_iov_md": false 00:13:55.270 }, 00:13:55.270 "memory_domains": [ 00:13:55.270 { 00:13:55.270 
"dma_device_id": "system", 00:13:55.270 "dma_device_type": 1 00:13:55.270 }, 00:13:55.270 { 00:13:55.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.270 "dma_device_type": 2 00:13:55.270 } 00:13:55.270 ], 00:13:55.270 "driver_specific": {} 00:13:55.270 } 00:13:55.270 ] 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.270 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 BaseBdev4 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 [ 00:13:55.530 { 00:13:55.530 "name": "BaseBdev4", 00:13:55.530 "aliases": [ 00:13:55.530 "0882957e-d288-4345-9630-06a47c2454c9" 00:13:55.530 ], 00:13:55.530 "product_name": "Malloc disk", 00:13:55.530 "block_size": 512, 00:13:55.530 "num_blocks": 65536, 00:13:55.530 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:55.530 "assigned_rate_limits": { 00:13:55.530 "rw_ios_per_sec": 0, 00:13:55.530 "rw_mbytes_per_sec": 0, 00:13:55.530 "r_mbytes_per_sec": 0, 00:13:55.530 "w_mbytes_per_sec": 0 00:13:55.530 }, 00:13:55.530 "claimed": false, 00:13:55.530 "zoned": false, 00:13:55.530 "supported_io_types": { 00:13:55.530 "read": true, 00:13:55.530 "write": true, 00:13:55.530 "unmap": true, 00:13:55.530 "flush": true, 00:13:55.530 "reset": true, 00:13:55.530 "nvme_admin": false, 00:13:55.530 "nvme_io": false, 00:13:55.530 "nvme_io_md": false, 00:13:55.530 "write_zeroes": true, 00:13:55.530 "zcopy": true, 00:13:55.530 "get_zone_info": false, 00:13:55.530 "zone_management": false, 00:13:55.530 "zone_append": false, 00:13:55.530 "compare": false, 00:13:55.530 "compare_and_write": false, 00:13:55.530 "abort": true, 00:13:55.530 "seek_hole": false, 00:13:55.530 "seek_data": false, 00:13:55.530 "copy": true, 00:13:55.530 "nvme_iov_md": false 00:13:55.530 }, 00:13:55.530 "memory_domains": [ 
00:13:55.530 { 00:13:55.530 "dma_device_id": "system", 00:13:55.530 "dma_device_type": 1 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.530 "dma_device_type": 2 00:13:55.530 } 00:13:55.530 ], 00:13:55.530 "driver_specific": {} 00:13:55.530 } 00:13:55.530 ] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 [2024-12-07 01:58:00.780708] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:55.530 [2024-12-07 01:58:00.780747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:55.530 [2024-12-07 01:58:00.780782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.530 [2024-12-07 01:58:00.782516] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.530 [2024-12-07 01:58:00.782564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.530 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.530 "name": "Existed_Raid", 00:13:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.530 "strip_size_kb": 64, 00:13:55.530 "state": "configuring", 00:13:55.530 "raid_level": "raid5f", 00:13:55.530 
"superblock": false, 00:13:55.530 "num_base_bdevs": 4, 00:13:55.530 "num_base_bdevs_discovered": 3, 00:13:55.530 "num_base_bdevs_operational": 4, 00:13:55.530 "base_bdevs_list": [ 00:13:55.530 { 00:13:55.530 "name": "BaseBdev1", 00:13:55.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.530 "is_configured": false, 00:13:55.530 "data_offset": 0, 00:13:55.530 "data_size": 0 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "name": "BaseBdev2", 00:13:55.530 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:55.530 "is_configured": true, 00:13:55.530 "data_offset": 0, 00:13:55.530 "data_size": 65536 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "name": "BaseBdev3", 00:13:55.530 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:55.530 "is_configured": true, 00:13:55.530 "data_offset": 0, 00:13:55.530 "data_size": 65536 00:13:55.530 }, 00:13:55.530 { 00:13:55.530 "name": "BaseBdev4", 00:13:55.530 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:55.531 "is_configured": true, 00:13:55.531 "data_offset": 0, 00:13:55.531 "data_size": 65536 00:13:55.531 } 00:13:55.531 ] 00:13:55.531 }' 00:13:55.531 01:58:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.531 01:58:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.788 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.789 [2024-12-07 01:58:01.176003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.789 "name": "Existed_Raid", 00:13:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.789 "strip_size_kb": 64, 00:13:55.789 "state": "configuring", 00:13:55.789 "raid_level": "raid5f", 00:13:55.789 "superblock": false, 
00:13:55.789 "num_base_bdevs": 4, 00:13:55.789 "num_base_bdevs_discovered": 2, 00:13:55.789 "num_base_bdevs_operational": 4, 00:13:55.789 "base_bdevs_list": [ 00:13:55.789 { 00:13:55.789 "name": "BaseBdev1", 00:13:55.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.789 "is_configured": false, 00:13:55.789 "data_offset": 0, 00:13:55.789 "data_size": 0 00:13:55.789 }, 00:13:55.789 { 00:13:55.789 "name": null, 00:13:55.789 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:55.789 "is_configured": false, 00:13:55.789 "data_offset": 0, 00:13:55.789 "data_size": 65536 00:13:55.789 }, 00:13:55.789 { 00:13:55.789 "name": "BaseBdev3", 00:13:55.789 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:55.789 "is_configured": true, 00:13:55.789 "data_offset": 0, 00:13:55.789 "data_size": 65536 00:13:55.789 }, 00:13:55.789 { 00:13:55.789 "name": "BaseBdev4", 00:13:55.789 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:55.789 "is_configured": true, 00:13:55.789 "data_offset": 0, 00:13:55.789 "data_size": 65536 00:13:55.789 } 00:13:55.789 ] 00:13:55.789 }' 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.789 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:56.354 
01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 [2024-12-07 01:58:01.685815] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.354 BaseBdev1 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.354 
01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 [ 00:13:56.354 { 00:13:56.354 "name": "BaseBdev1", 00:13:56.354 "aliases": [ 00:13:56.354 "e85d5307-de74-42a0-b14c-5e3c9c32ff7f" 00:13:56.354 ], 00:13:56.354 "product_name": "Malloc disk", 00:13:56.354 "block_size": 512, 00:13:56.354 "num_blocks": 65536, 00:13:56.354 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:56.354 "assigned_rate_limits": { 00:13:56.354 "rw_ios_per_sec": 0, 00:13:56.354 "rw_mbytes_per_sec": 0, 00:13:56.354 "r_mbytes_per_sec": 0, 00:13:56.354 "w_mbytes_per_sec": 0 00:13:56.354 }, 00:13:56.354 "claimed": true, 00:13:56.354 "claim_type": "exclusive_write", 00:13:56.354 "zoned": false, 00:13:56.354 "supported_io_types": { 00:13:56.354 "read": true, 00:13:56.354 "write": true, 00:13:56.354 "unmap": true, 00:13:56.354 "flush": true, 00:13:56.354 "reset": true, 00:13:56.354 "nvme_admin": false, 00:13:56.354 "nvme_io": false, 00:13:56.354 "nvme_io_md": false, 00:13:56.354 "write_zeroes": true, 00:13:56.354 "zcopy": true, 00:13:56.354 "get_zone_info": false, 00:13:56.354 "zone_management": false, 00:13:56.354 "zone_append": false, 00:13:56.354 "compare": false, 00:13:56.354 "compare_and_write": false, 00:13:56.354 "abort": true, 00:13:56.354 "seek_hole": false, 00:13:56.354 "seek_data": false, 00:13:56.354 "copy": true, 00:13:56.354 "nvme_iov_md": false 00:13:56.354 }, 00:13:56.354 "memory_domains": [ 00:13:56.354 { 00:13:56.354 "dma_device_id": "system", 00:13:56.354 "dma_device_type": 1 00:13:56.354 }, 00:13:56.354 { 00:13:56.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.354 "dma_device_type": 2 00:13:56.354 } 00:13:56.354 ], 00:13:56.354 "driver_specific": {} 00:13:56.354 } 00:13:56.354 ] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:56.354 01:58:01 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.354 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.354 "name": "Existed_Raid", 00:13:56.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.354 "strip_size_kb": 64, 00:13:56.354 "state": 
"configuring", 00:13:56.354 "raid_level": "raid5f", 00:13:56.354 "superblock": false, 00:13:56.354 "num_base_bdevs": 4, 00:13:56.354 "num_base_bdevs_discovered": 3, 00:13:56.354 "num_base_bdevs_operational": 4, 00:13:56.354 "base_bdevs_list": [ 00:13:56.354 { 00:13:56.354 "name": "BaseBdev1", 00:13:56.354 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:56.354 "is_configured": true, 00:13:56.354 "data_offset": 0, 00:13:56.354 "data_size": 65536 00:13:56.354 }, 00:13:56.354 { 00:13:56.354 "name": null, 00:13:56.354 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:56.354 "is_configured": false, 00:13:56.354 "data_offset": 0, 00:13:56.354 "data_size": 65536 00:13:56.354 }, 00:13:56.354 { 00:13:56.354 "name": "BaseBdev3", 00:13:56.354 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:56.354 "is_configured": true, 00:13:56.354 "data_offset": 0, 00:13:56.354 "data_size": 65536 00:13:56.354 }, 00:13:56.354 { 00:13:56.354 "name": "BaseBdev4", 00:13:56.354 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:56.354 "is_configured": true, 00:13:56.354 "data_offset": 0, 00:13:56.354 "data_size": 65536 00:13:56.354 } 00:13:56.354 ] 00:13:56.354 }' 00:13:56.355 01:58:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.355 01:58:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.923 01:58:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 [2024-12-07 01:58:02.208959] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.923 01:58:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.923 "name": "Existed_Raid", 00:13:56.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.923 "strip_size_kb": 64, 00:13:56.923 "state": "configuring", 00:13:56.923 "raid_level": "raid5f", 00:13:56.923 "superblock": false, 00:13:56.923 "num_base_bdevs": 4, 00:13:56.923 "num_base_bdevs_discovered": 2, 00:13:56.923 "num_base_bdevs_operational": 4, 00:13:56.923 "base_bdevs_list": [ 00:13:56.923 { 00:13:56.923 "name": "BaseBdev1", 00:13:56.923 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:56.923 "is_configured": true, 00:13:56.923 "data_offset": 0, 00:13:56.923 "data_size": 65536 00:13:56.923 }, 00:13:56.923 { 00:13:56.923 "name": null, 00:13:56.923 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:56.923 "is_configured": false, 00:13:56.923 "data_offset": 0, 00:13:56.923 "data_size": 65536 00:13:56.923 }, 00:13:56.923 { 00:13:56.923 "name": null, 00:13:56.923 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:56.923 "is_configured": false, 00:13:56.923 "data_offset": 0, 00:13:56.923 "data_size": 65536 00:13:56.923 }, 00:13:56.923 { 00:13:56.923 "name": "BaseBdev4", 00:13:56.923 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:56.923 "is_configured": true, 00:13:56.923 "data_offset": 0, 00:13:56.923 "data_size": 65536 00:13:56.923 } 00:13:56.923 ] 00:13:56.923 }' 00:13:56.923 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.923 01:58:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.189 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.189 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.189 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.189 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.189 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.465 [2024-12-07 01:58:02.664224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.465 
01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.465 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.466 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.466 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.466 "name": "Existed_Raid", 00:13:57.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.466 "strip_size_kb": 64, 00:13:57.466 "state": "configuring", 00:13:57.466 "raid_level": "raid5f", 00:13:57.466 "superblock": false, 00:13:57.466 "num_base_bdevs": 4, 00:13:57.466 "num_base_bdevs_discovered": 3, 00:13:57.466 "num_base_bdevs_operational": 4, 00:13:57.466 "base_bdevs_list": [ 00:13:57.466 { 00:13:57.466 "name": "BaseBdev1", 00:13:57.466 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": null, 00:13:57.466 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:57.466 "is_configured": 
false, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": "BaseBdev3", 00:13:57.466 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 }, 00:13:57.466 { 00:13:57.466 "name": "BaseBdev4", 00:13:57.466 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:57.466 "is_configured": true, 00:13:57.466 "data_offset": 0, 00:13:57.466 "data_size": 65536 00:13:57.466 } 00:13:57.466 ] 00:13:57.466 }' 00:13:57.466 01:58:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.466 01:58:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.824 [2024-12-07 01:58:03.143430] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.824 "name": "Existed_Raid", 00:13:57.824 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:57.824 "strip_size_kb": 64, 00:13:57.824 "state": "configuring", 00:13:57.824 "raid_level": "raid5f", 00:13:57.824 "superblock": false, 00:13:57.824 "num_base_bdevs": 4, 00:13:57.824 "num_base_bdevs_discovered": 2, 00:13:57.824 "num_base_bdevs_operational": 4, 00:13:57.824 "base_bdevs_list": [ 00:13:57.824 { 00:13:57.824 "name": null, 00:13:57.824 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:57.824 "is_configured": false, 00:13:57.824 "data_offset": 0, 00:13:57.824 "data_size": 65536 00:13:57.824 }, 00:13:57.824 { 00:13:57.824 "name": null, 00:13:57.824 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:57.824 "is_configured": false, 00:13:57.824 "data_offset": 0, 00:13:57.824 "data_size": 65536 00:13:57.824 }, 00:13:57.824 { 00:13:57.824 "name": "BaseBdev3", 00:13:57.824 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:57.824 "is_configured": true, 00:13:57.824 "data_offset": 0, 00:13:57.824 "data_size": 65536 00:13:57.824 }, 00:13:57.824 { 00:13:57.824 "name": "BaseBdev4", 00:13:57.824 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:57.824 "is_configured": true, 00:13:57.824 "data_offset": 0, 00:13:57.824 "data_size": 65536 00:13:57.824 } 00:13:57.824 ] 00:13:57.824 }' 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.824 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.388 [2024-12-07 01:58:03.648992] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.388 "name": "Existed_Raid", 00:13:58.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.388 "strip_size_kb": 64, 00:13:58.388 "state": "configuring", 00:13:58.388 "raid_level": "raid5f", 00:13:58.388 "superblock": false, 00:13:58.388 "num_base_bdevs": 4, 00:13:58.388 "num_base_bdevs_discovered": 3, 00:13:58.388 "num_base_bdevs_operational": 4, 00:13:58.388 "base_bdevs_list": [ 00:13:58.388 { 00:13:58.388 "name": null, 00:13:58.388 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:58.388 "is_configured": false, 00:13:58.388 "data_offset": 0, 00:13:58.388 "data_size": 65536 00:13:58.388 }, 00:13:58.388 { 00:13:58.388 "name": "BaseBdev2", 00:13:58.388 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:58.388 "is_configured": true, 00:13:58.388 "data_offset": 0, 00:13:58.388 "data_size": 65536 00:13:58.388 }, 00:13:58.388 { 00:13:58.388 "name": "BaseBdev3", 00:13:58.388 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:58.388 "is_configured": true, 00:13:58.388 "data_offset": 0, 00:13:58.388 "data_size": 65536 00:13:58.388 }, 00:13:58.388 { 00:13:58.388 "name": "BaseBdev4", 00:13:58.388 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:58.388 "is_configured": true, 00:13:58.388 "data_offset": 0, 00:13:58.388 "data_size": 65536 00:13:58.388 } 00:13:58.388 ] 00:13:58.388 }' 00:13:58.388 01:58:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.388 01:58:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:58.645 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e85d5307-de74-42a0-b14c-5e3c9c32ff7f 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 [2024-12-07 01:58:04.138842] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:58.903 [2024-12-07 
01:58:04.138892] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:58.903 [2024-12-07 01:58:04.138899] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:58.903 [2024-12-07 01:58:04.139182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:58.903 [2024-12-07 01:58:04.139612] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:58.903 [2024-12-07 01:58:04.139633] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:58.903 [2024-12-07 01:58:04.139821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.903 NewBaseBdev 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 [ 00:13:58.903 { 00:13:58.903 "name": "NewBaseBdev", 00:13:58.903 "aliases": [ 00:13:58.903 "e85d5307-de74-42a0-b14c-5e3c9c32ff7f" 00:13:58.903 ], 00:13:58.903 "product_name": "Malloc disk", 00:13:58.903 "block_size": 512, 00:13:58.903 "num_blocks": 65536, 00:13:58.903 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:58.903 "assigned_rate_limits": { 00:13:58.903 "rw_ios_per_sec": 0, 00:13:58.903 "rw_mbytes_per_sec": 0, 00:13:58.903 "r_mbytes_per_sec": 0, 00:13:58.903 "w_mbytes_per_sec": 0 00:13:58.903 }, 00:13:58.903 "claimed": true, 00:13:58.903 "claim_type": "exclusive_write", 00:13:58.903 "zoned": false, 00:13:58.903 "supported_io_types": { 00:13:58.903 "read": true, 00:13:58.903 "write": true, 00:13:58.903 "unmap": true, 00:13:58.903 "flush": true, 00:13:58.903 "reset": true, 00:13:58.903 "nvme_admin": false, 00:13:58.903 "nvme_io": false, 00:13:58.903 "nvme_io_md": false, 00:13:58.903 "write_zeroes": true, 00:13:58.903 "zcopy": true, 00:13:58.903 "get_zone_info": false, 00:13:58.903 "zone_management": false, 00:13:58.903 "zone_append": false, 00:13:58.903 "compare": false, 00:13:58.903 "compare_and_write": false, 00:13:58.903 "abort": true, 00:13:58.903 "seek_hole": false, 00:13:58.903 "seek_data": false, 00:13:58.903 "copy": true, 00:13:58.903 "nvme_iov_md": false 00:13:58.903 }, 00:13:58.903 "memory_domains": [ 00:13:58.903 { 00:13:58.903 "dma_device_id": "system", 00:13:58.903 "dma_device_type": 1 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.903 "dma_device_type": 2 00:13:58.903 } 
00:13:58.903 ], 00:13:58.903 "driver_specific": {} 00:13:58.903 } 00:13:58.903 ] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.903 "name": "Existed_Raid", 00:13:58.903 "uuid": "bfa0b60d-81c1-46c2-83c3-f2c189ab6abd", 00:13:58.903 "strip_size_kb": 64, 00:13:58.903 "state": "online", 00:13:58.903 "raid_level": "raid5f", 00:13:58.903 "superblock": false, 00:13:58.903 "num_base_bdevs": 4, 00:13:58.903 "num_base_bdevs_discovered": 4, 00:13:58.903 "num_base_bdevs_operational": 4, 00:13:58.903 "base_bdevs_list": [ 00:13:58.903 { 00:13:58.903 "name": "NewBaseBdev", 00:13:58.903 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 0, 00:13:58.903 "data_size": 65536 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev2", 00:13:58.903 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 0, 00:13:58.903 "data_size": 65536 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev3", 00:13:58.903 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 0, 00:13:58.903 "data_size": 65536 00:13:58.903 }, 00:13:58.903 { 00:13:58.903 "name": "BaseBdev4", 00:13:58.903 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:58.903 "is_configured": true, 00:13:58.903 "data_offset": 0, 00:13:58.903 "data_size": 65536 00:13:58.903 } 00:13:58.903 ] 00:13:58.903 }' 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.903 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.467 [2024-12-07 01:58:04.630209] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.467 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:59.467 "name": "Existed_Raid", 00:13:59.467 "aliases": [ 00:13:59.467 "bfa0b60d-81c1-46c2-83c3-f2c189ab6abd" 00:13:59.467 ], 00:13:59.467 "product_name": "Raid Volume", 00:13:59.467 "block_size": 512, 00:13:59.467 "num_blocks": 196608, 00:13:59.467 "uuid": "bfa0b60d-81c1-46c2-83c3-f2c189ab6abd", 00:13:59.467 "assigned_rate_limits": { 00:13:59.467 "rw_ios_per_sec": 0, 00:13:59.468 "rw_mbytes_per_sec": 0, 00:13:59.468 "r_mbytes_per_sec": 0, 00:13:59.468 "w_mbytes_per_sec": 0 00:13:59.468 }, 00:13:59.468 "claimed": false, 00:13:59.468 "zoned": false, 00:13:59.468 "supported_io_types": { 00:13:59.468 "read": true, 00:13:59.468 "write": true, 00:13:59.468 "unmap": false, 00:13:59.468 "flush": false, 00:13:59.468 "reset": true, 00:13:59.468 "nvme_admin": false, 00:13:59.468 "nvme_io": false, 00:13:59.468 "nvme_io_md": 
false, 00:13:59.468 "write_zeroes": true, 00:13:59.468 "zcopy": false, 00:13:59.468 "get_zone_info": false, 00:13:59.468 "zone_management": false, 00:13:59.468 "zone_append": false, 00:13:59.468 "compare": false, 00:13:59.468 "compare_and_write": false, 00:13:59.468 "abort": false, 00:13:59.468 "seek_hole": false, 00:13:59.468 "seek_data": false, 00:13:59.468 "copy": false, 00:13:59.468 "nvme_iov_md": false 00:13:59.468 }, 00:13:59.468 "driver_specific": { 00:13:59.468 "raid": { 00:13:59.468 "uuid": "bfa0b60d-81c1-46c2-83c3-f2c189ab6abd", 00:13:59.468 "strip_size_kb": 64, 00:13:59.468 "state": "online", 00:13:59.468 "raid_level": "raid5f", 00:13:59.468 "superblock": false, 00:13:59.468 "num_base_bdevs": 4, 00:13:59.468 "num_base_bdevs_discovered": 4, 00:13:59.468 "num_base_bdevs_operational": 4, 00:13:59.468 "base_bdevs_list": [ 00:13:59.468 { 00:13:59.468 "name": "NewBaseBdev", 00:13:59.468 "uuid": "e85d5307-de74-42a0-b14c-5e3c9c32ff7f", 00:13:59.468 "is_configured": true, 00:13:59.468 "data_offset": 0, 00:13:59.468 "data_size": 65536 00:13:59.468 }, 00:13:59.468 { 00:13:59.468 "name": "BaseBdev2", 00:13:59.468 "uuid": "f3acf447-7447-4374-b875-b85d2787a9df", 00:13:59.468 "is_configured": true, 00:13:59.468 "data_offset": 0, 00:13:59.468 "data_size": 65536 00:13:59.468 }, 00:13:59.468 { 00:13:59.468 "name": "BaseBdev3", 00:13:59.468 "uuid": "770d6267-3f37-44e2-8a74-738a22897c49", 00:13:59.468 "is_configured": true, 00:13:59.468 "data_offset": 0, 00:13:59.468 "data_size": 65536 00:13:59.468 }, 00:13:59.468 { 00:13:59.468 "name": "BaseBdev4", 00:13:59.468 "uuid": "0882957e-d288-4345-9630-06a47c2454c9", 00:13:59.468 "is_configured": true, 00:13:59.468 "data_offset": 0, 00:13:59.468 "data_size": 65536 00:13:59.468 } 00:13:59.468 ] 00:13:59.468 } 00:13:59.468 } 00:13:59.468 }' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:59.468 01:58:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:59.468 BaseBdev2 00:13:59.468 BaseBdev3 00:13:59.468 BaseBdev4' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.468 01:58:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.468 [2024-12-07 01:58:04.909525] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.468 [2024-12-07 01:58:04.909555] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.468 [2024-12-07 01:58:04.909618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.468 [2024-12-07 01:58:04.909871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.468 [2024-12-07 01:58:04.909885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92941 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 92941 ']' 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 92941 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:13:59.468 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92941 00:13:59.728 killing process with pid 92941 00:13:59.728 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:59.728 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:59.728 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92941' 00:13:59.728 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 92941 00:13:59.728 [2024-12-07 01:58:04.952502] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.728 01:58:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 92941 00:13:59.728 [2024-12-07 01:58:04.992393] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:59.986 00:13:59.986 real 0m9.302s 00:13:59.986 user 0m15.827s 00:13:59.986 sys 0m2.000s 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.986 ************************************ 00:13:59.986 END TEST raid5f_state_function_test 00:13:59.986 ************************************ 00:13:59.986 01:58:05 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:13:59.986 01:58:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:59.986 01:58:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.986 01:58:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.986 ************************************ 00:13:59.986 START TEST 
raid5f_state_function_test_sb 00:13:59.986 ************************************ 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:59.986 
01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93591 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93591' 00:13:59.986 Process raid pid: 93591 00:13:59.986 01:58:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93591 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 93591 ']' 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:59.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:59.986 01:58:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.986 [2024-12-07 01:58:05.395071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:13:59.986 [2024-12-07 01:58:05.395201] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.245 [2024-12-07 01:58:05.541119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.245 [2024-12-07 01:58:05.584976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.245 [2024-12-07 01:58:05.626635] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.245 [2024-12-07 01:58:05.626684] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.811 [2024-12-07 01:58:06.215716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:00.811 [2024-12-07 01:58:06.215768] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:00.811 [2024-12-07 01:58:06.215781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:00.811 [2024-12-07 01:58:06.215791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:00.811 [2024-12-07 01:58:06.215797] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:14:00.811 [2024-12-07 01:58:06.215809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:00.811 [2024-12-07 01:58:06.215814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:00.811 [2024-12-07 01:58:06.215823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.811 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.812 "name": "Existed_Raid", 00:14:00.812 "uuid": "181e41bc-f708-4f9b-9b23-5c2f7dfbb2c4", 00:14:00.812 "strip_size_kb": 64, 00:14:00.812 "state": "configuring", 00:14:00.812 "raid_level": "raid5f", 00:14:00.812 "superblock": true, 00:14:00.812 "num_base_bdevs": 4, 00:14:00.812 "num_base_bdevs_discovered": 0, 00:14:00.812 "num_base_bdevs_operational": 4, 00:14:00.812 "base_bdevs_list": [ 00:14:00.812 { 00:14:00.812 "name": "BaseBdev1", 00:14:00.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.812 "is_configured": false, 00:14:00.812 "data_offset": 0, 00:14:00.812 "data_size": 0 00:14:00.812 }, 00:14:00.812 { 00:14:00.812 "name": "BaseBdev2", 00:14:00.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.812 "is_configured": false, 00:14:00.812 "data_offset": 0, 00:14:00.812 "data_size": 0 00:14:00.812 }, 00:14:00.812 { 00:14:00.812 "name": "BaseBdev3", 00:14:00.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.812 "is_configured": false, 00:14:00.812 "data_offset": 0, 00:14:00.812 "data_size": 0 00:14:00.812 }, 00:14:00.812 { 00:14:00.812 "name": "BaseBdev4", 00:14:00.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.812 "is_configured": false, 00:14:00.812 "data_offset": 0, 00:14:00.812 "data_size": 0 00:14:00.812 } 00:14:00.812 ] 00:14:00.812 }' 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.812 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.377 [2024-12-07 01:58:06.690738] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.377 [2024-12-07 01:58:06.690783] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.377 [2024-12-07 01:58:06.702756] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.377 [2024-12-07 01:58:06.702790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.377 [2024-12-07 01:58:06.702814] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.377 [2024-12-07 01:58:06.702823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.377 [2024-12-07 01:58:06.702829] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.377 [2024-12-07 01:58:06.702837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.377 [2024-12-07 01:58:06.702843] 
bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:01.377 [2024-12-07 01:58:06.702851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.377 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.377 [2024-12-07 01:58:06.723227] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.377 BaseBdev1 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.378 [ 00:14:01.378 { 00:14:01.378 "name": "BaseBdev1", 00:14:01.378 "aliases": [ 00:14:01.378 "f5cce086-4f58-4668-8a97-662ff6c0c922" 00:14:01.378 ], 00:14:01.378 "product_name": "Malloc disk", 00:14:01.378 "block_size": 512, 00:14:01.378 "num_blocks": 65536, 00:14:01.378 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:01.378 "assigned_rate_limits": { 00:14:01.378 "rw_ios_per_sec": 0, 00:14:01.378 "rw_mbytes_per_sec": 0, 00:14:01.378 "r_mbytes_per_sec": 0, 00:14:01.378 "w_mbytes_per_sec": 0 00:14:01.378 }, 00:14:01.378 "claimed": true, 00:14:01.378 "claim_type": "exclusive_write", 00:14:01.378 "zoned": false, 00:14:01.378 "supported_io_types": { 00:14:01.378 "read": true, 00:14:01.378 "write": true, 00:14:01.378 "unmap": true, 00:14:01.378 "flush": true, 00:14:01.378 "reset": true, 00:14:01.378 "nvme_admin": false, 00:14:01.378 "nvme_io": false, 00:14:01.378 "nvme_io_md": false, 00:14:01.378 "write_zeroes": true, 00:14:01.378 "zcopy": true, 00:14:01.378 "get_zone_info": false, 00:14:01.378 "zone_management": false, 00:14:01.378 "zone_append": false, 00:14:01.378 "compare": false, 00:14:01.378 "compare_and_write": false, 00:14:01.378 "abort": true, 00:14:01.378 "seek_hole": false, 00:14:01.378 "seek_data": false, 00:14:01.378 "copy": true, 00:14:01.378 "nvme_iov_md": false 00:14:01.378 }, 00:14:01.378 "memory_domains": [ 00:14:01.378 { 00:14:01.378 "dma_device_id": "system", 00:14:01.378 "dma_device_type": 1 00:14:01.378 }, 00:14:01.378 { 00:14:01.378 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:01.378 "dma_device_type": 2 00:14:01.378 } 00:14:01.378 ], 00:14:01.378 "driver_specific": {} 00:14:01.378 } 00:14:01.378 ] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.378 01:58:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.378 "name": "Existed_Raid", 00:14:01.378 "uuid": "c25c6484-1972-425d-9ace-36ee541c59c7", 00:14:01.378 "strip_size_kb": 64, 00:14:01.378 "state": "configuring", 00:14:01.378 "raid_level": "raid5f", 00:14:01.378 "superblock": true, 00:14:01.378 "num_base_bdevs": 4, 00:14:01.378 "num_base_bdevs_discovered": 1, 00:14:01.378 "num_base_bdevs_operational": 4, 00:14:01.378 "base_bdevs_list": [ 00:14:01.378 { 00:14:01.378 "name": "BaseBdev1", 00:14:01.378 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:01.378 "is_configured": true, 00:14:01.378 "data_offset": 2048, 00:14:01.378 "data_size": 63488 00:14:01.378 }, 00:14:01.378 { 00:14:01.378 "name": "BaseBdev2", 00:14:01.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.378 "is_configured": false, 00:14:01.378 "data_offset": 0, 00:14:01.378 "data_size": 0 00:14:01.378 }, 00:14:01.378 { 00:14:01.378 "name": "BaseBdev3", 00:14:01.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.378 "is_configured": false, 00:14:01.378 "data_offset": 0, 00:14:01.378 "data_size": 0 00:14:01.378 }, 00:14:01.378 { 00:14:01.378 "name": "BaseBdev4", 00:14:01.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.378 "is_configured": false, 00:14:01.378 "data_offset": 0, 00:14:01.378 "data_size": 0 00:14:01.378 } 00:14:01.378 ] 00:14:01.378 }' 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.378 01:58:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:01.945 01:58:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.945 [2024-12-07 01:58:07.154506] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.945 [2024-12-07 01:58:07.154561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.945 [2024-12-07 01:58:07.166550] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.945 [2024-12-07 01:58:07.168343] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.945 [2024-12-07 01:58:07.168379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.945 [2024-12-07 01:58:07.168388] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:01.945 [2024-12-07 01:58:07.168397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:01.945 [2024-12-07 01:58:07.168403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:01.945 [2024-12-07 01:58:07.168411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.945 01:58:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.945 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.945 "name": "Existed_Raid", 00:14:01.945 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:01.945 "strip_size_kb": 64, 00:14:01.945 "state": "configuring", 00:14:01.945 "raid_level": "raid5f", 00:14:01.945 "superblock": true, 00:14:01.945 "num_base_bdevs": 4, 00:14:01.945 "num_base_bdevs_discovered": 1, 00:14:01.945 "num_base_bdevs_operational": 4, 00:14:01.945 "base_bdevs_list": [ 00:14:01.945 { 00:14:01.945 "name": "BaseBdev1", 00:14:01.945 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:01.945 "is_configured": true, 00:14:01.945 "data_offset": 2048, 00:14:01.945 "data_size": 63488 00:14:01.945 }, 00:14:01.945 { 00:14:01.945 "name": "BaseBdev2", 00:14:01.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.945 "is_configured": false, 00:14:01.945 "data_offset": 0, 00:14:01.945 "data_size": 0 00:14:01.945 }, 00:14:01.945 { 00:14:01.945 "name": "BaseBdev3", 00:14:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.946 "is_configured": false, 00:14:01.946 "data_offset": 0, 00:14:01.946 "data_size": 0 00:14:01.946 }, 00:14:01.946 { 00:14:01.946 "name": "BaseBdev4", 00:14:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.946 "is_configured": false, 00:14:01.946 "data_offset": 0, 00:14:01.946 "data_size": 0 00:14:01.946 } 00:14:01.946 ] 00:14:01.946 }' 00:14:01.946 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.946 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.205 [2024-12-07 01:58:07.630284] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.205 BaseBdev2 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.205 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.206 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.206 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:02.206 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.206 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.206 [ 00:14:02.206 { 00:14:02.206 "name": "BaseBdev2", 00:14:02.206 "aliases": [ 00:14:02.206 
"df7f7438-d1ac-4162-b626-1f86cd1abe8f" 00:14:02.206 ], 00:14:02.206 "product_name": "Malloc disk", 00:14:02.206 "block_size": 512, 00:14:02.206 "num_blocks": 65536, 00:14:02.206 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:02.206 "assigned_rate_limits": { 00:14:02.206 "rw_ios_per_sec": 0, 00:14:02.206 "rw_mbytes_per_sec": 0, 00:14:02.206 "r_mbytes_per_sec": 0, 00:14:02.206 "w_mbytes_per_sec": 0 00:14:02.206 }, 00:14:02.206 "claimed": true, 00:14:02.206 "claim_type": "exclusive_write", 00:14:02.206 "zoned": false, 00:14:02.206 "supported_io_types": { 00:14:02.206 "read": true, 00:14:02.206 "write": true, 00:14:02.206 "unmap": true, 00:14:02.206 "flush": true, 00:14:02.206 "reset": true, 00:14:02.206 "nvme_admin": false, 00:14:02.206 "nvme_io": false, 00:14:02.206 "nvme_io_md": false, 00:14:02.206 "write_zeroes": true, 00:14:02.206 "zcopy": true, 00:14:02.206 "get_zone_info": false, 00:14:02.206 "zone_management": false, 00:14:02.206 "zone_append": false, 00:14:02.206 "compare": false, 00:14:02.206 "compare_and_write": false, 00:14:02.206 "abort": true, 00:14:02.206 "seek_hole": false, 00:14:02.206 "seek_data": false, 00:14:02.206 "copy": true, 00:14:02.206 "nvme_iov_md": false 00:14:02.206 }, 00:14:02.206 "memory_domains": [ 00:14:02.206 { 00:14:02.206 "dma_device_id": "system", 00:14:02.206 "dma_device_type": 1 00:14:02.206 }, 00:14:02.206 { 00:14:02.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.206 "dma_device_type": 2 00:14:02.206 } 00:14:02.206 ], 00:14:02.206 "driver_specific": {} 00:14:02.206 } 00:14:02.206 ] 00:14:02.206 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.465 "name": "Existed_Raid", 00:14:02.465 "uuid": 
"bb3a95c3-2979-446d-91d3-36875795549c", 00:14:02.465 "strip_size_kb": 64, 00:14:02.465 "state": "configuring", 00:14:02.465 "raid_level": "raid5f", 00:14:02.465 "superblock": true, 00:14:02.465 "num_base_bdevs": 4, 00:14:02.465 "num_base_bdevs_discovered": 2, 00:14:02.465 "num_base_bdevs_operational": 4, 00:14:02.465 "base_bdevs_list": [ 00:14:02.465 { 00:14:02.465 "name": "BaseBdev1", 00:14:02.465 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:02.465 "is_configured": true, 00:14:02.465 "data_offset": 2048, 00:14:02.465 "data_size": 63488 00:14:02.465 }, 00:14:02.465 { 00:14:02.465 "name": "BaseBdev2", 00:14:02.465 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:02.465 "is_configured": true, 00:14:02.465 "data_offset": 2048, 00:14:02.465 "data_size": 63488 00:14:02.465 }, 00:14:02.465 { 00:14:02.465 "name": "BaseBdev3", 00:14:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.465 "is_configured": false, 00:14:02.465 "data_offset": 0, 00:14:02.465 "data_size": 0 00:14:02.465 }, 00:14:02.465 { 00:14:02.465 "name": "BaseBdev4", 00:14:02.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.465 "is_configured": false, 00:14:02.465 "data_offset": 0, 00:14:02.465 "data_size": 0 00:14:02.465 } 00:14:02.465 ] 00:14:02.465 }' 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.465 01:58:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.724 [2024-12-07 01:58:08.148114] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.724 BaseBdev3 
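After each `bdev_malloc_create`, the trace runs `waitforbdev`, which (per the `autotest_common.sh` lines above) defaults `bdev_timeout` to 2000 ms and polls `rpc_cmd bdev_get_bdevs -b <name> -t 2000` until the bdev is visible. The generic shape of that wait loop can be sketched as follows; `wait_for_bdev` and the `fake_get` stub standing in for the RPC call are hypothetical names, not SPDK APIs:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_s=2.0, interval_s=0.1):
    """Poll get_bdev(name) until it returns info or the timeout expires.

    Mirrors the waitforbdev shell helper seen in this log, which defaults
    to a 2000 ms timeout; get_bdev stands in for `rpc_cmd bdev_get_bdevs -b`.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        info = get_bdev(name)
        if info is not None:
            return info
        time.sleep(interval_s)
    raise TimeoutError(f"bdev {name} did not appear within {timeout_s}s")

# Stub getter: the bdev "appears" on the second poll.
calls = []
def fake_get(name):
    calls.append(name)
    return {"name": name} if len(calls) >= 2 else None

result = wait_for_bdev(fake_get, "BaseBdev3")
print(result["name"])  # BaseBdev3
```

Polling with a bounded deadline rather than a fixed sleep is what lets the test proceed as soon as `bdev_wait_for_examine` completes.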
00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:02.724 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 [ 00:14:02.725 { 00:14:02.725 "name": "BaseBdev3", 00:14:02.725 "aliases": [ 00:14:02.725 "79bfc55d-c62f-4395-936c-c06b8b4dbb36" 00:14:02.725 ], 00:14:02.725 "product_name": "Malloc disk", 00:14:02.725 "block_size": 512, 00:14:02.725 "num_blocks": 65536, 00:14:02.725 "uuid": "79bfc55d-c62f-4395-936c-c06b8b4dbb36", 00:14:02.725 
"assigned_rate_limits": { 00:14:02.725 "rw_ios_per_sec": 0, 00:14:02.725 "rw_mbytes_per_sec": 0, 00:14:02.725 "r_mbytes_per_sec": 0, 00:14:02.725 "w_mbytes_per_sec": 0 00:14:02.725 }, 00:14:02.725 "claimed": true, 00:14:02.725 "claim_type": "exclusive_write", 00:14:02.725 "zoned": false, 00:14:02.725 "supported_io_types": { 00:14:02.725 "read": true, 00:14:02.725 "write": true, 00:14:02.725 "unmap": true, 00:14:02.725 "flush": true, 00:14:02.725 "reset": true, 00:14:02.725 "nvme_admin": false, 00:14:02.725 "nvme_io": false, 00:14:02.725 "nvme_io_md": false, 00:14:02.725 "write_zeroes": true, 00:14:02.725 "zcopy": true, 00:14:02.725 "get_zone_info": false, 00:14:02.725 "zone_management": false, 00:14:02.725 "zone_append": false, 00:14:02.725 "compare": false, 00:14:02.725 "compare_and_write": false, 00:14:02.725 "abort": true, 00:14:02.725 "seek_hole": false, 00:14:02.725 "seek_data": false, 00:14:02.725 "copy": true, 00:14:02.725 "nvme_iov_md": false 00:14:02.725 }, 00:14:02.725 "memory_domains": [ 00:14:02.725 { 00:14:02.725 "dma_device_id": "system", 00:14:02.725 "dma_device_type": 1 00:14:02.725 }, 00:14:02.725 { 00:14:02.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.725 "dma_device_type": 2 00:14:02.725 } 00:14:02.725 ], 00:14:02.725 "driver_specific": {} 00:14:02.725 } 00:14:02.725 ] 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.725 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.984 "name": "Existed_Raid", 00:14:02.984 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:02.984 "strip_size_kb": 64, 00:14:02.984 "state": "configuring", 00:14:02.984 "raid_level": "raid5f", 00:14:02.984 "superblock": true, 00:14:02.984 "num_base_bdevs": 4, 00:14:02.984 "num_base_bdevs_discovered": 3, 
00:14:02.984 "num_base_bdevs_operational": 4, 00:14:02.984 "base_bdevs_list": [ 00:14:02.984 { 00:14:02.984 "name": "BaseBdev1", 00:14:02.984 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "BaseBdev2", 00:14:02.984 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "BaseBdev3", 00:14:02.984 "uuid": "79bfc55d-c62f-4395-936c-c06b8b4dbb36", 00:14:02.984 "is_configured": true, 00:14:02.984 "data_offset": 2048, 00:14:02.984 "data_size": 63488 00:14:02.984 }, 00:14:02.984 { 00:14:02.984 "name": "BaseBdev4", 00:14:02.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.984 "is_configured": false, 00:14:02.984 "data_offset": 0, 00:14:02.984 "data_size": 0 00:14:02.984 } 00:14:02.984 ] 00:14:02.984 }' 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.984 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 [2024-12-07 01:58:08.646054] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:03.244 [2024-12-07 01:58:08.646337] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:03.244 [2024-12-07 01:58:08.646397] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:03.244 [2024-12-07 
01:58:08.646681] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:03.244 BaseBdev4 00:14:03.244 [2024-12-07 01:58:08.647160] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:03.244 [2024-12-07 01:58:08.647220] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:03.244 [2024-12-07 01:58:08.647394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:03.244 01:58:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.244 [ 00:14:03.244 { 00:14:03.244 "name": "BaseBdev4", 00:14:03.244 "aliases": [ 00:14:03.244 "19d8e27d-426e-4053-833e-bca76de8ad43" 00:14:03.244 ], 00:14:03.244 "product_name": "Malloc disk", 00:14:03.244 "block_size": 512, 00:14:03.244 "num_blocks": 65536, 00:14:03.244 "uuid": "19d8e27d-426e-4053-833e-bca76de8ad43", 00:14:03.244 "assigned_rate_limits": { 00:14:03.244 "rw_ios_per_sec": 0, 00:14:03.244 "rw_mbytes_per_sec": 0, 00:14:03.244 "r_mbytes_per_sec": 0, 00:14:03.244 "w_mbytes_per_sec": 0 00:14:03.244 }, 00:14:03.244 "claimed": true, 00:14:03.244 "claim_type": "exclusive_write", 00:14:03.244 "zoned": false, 00:14:03.244 "supported_io_types": { 00:14:03.244 "read": true, 00:14:03.244 "write": true, 00:14:03.244 "unmap": true, 00:14:03.244 "flush": true, 00:14:03.244 "reset": true, 00:14:03.244 "nvme_admin": false, 00:14:03.244 "nvme_io": false, 00:14:03.244 "nvme_io_md": false, 00:14:03.244 "write_zeroes": true, 00:14:03.244 "zcopy": true, 00:14:03.244 "get_zone_info": false, 00:14:03.244 "zone_management": false, 00:14:03.244 "zone_append": false, 00:14:03.244 "compare": false, 00:14:03.244 "compare_and_write": false, 00:14:03.244 "abort": true, 00:14:03.244 "seek_hole": false, 00:14:03.244 "seek_data": false, 00:14:03.244 "copy": true, 00:14:03.244 "nvme_iov_md": false 00:14:03.244 }, 00:14:03.244 "memory_domains": [ 00:14:03.244 { 00:14:03.244 "dma_device_id": "system", 00:14:03.244 "dma_device_type": 1 00:14:03.244 }, 00:14:03.244 { 00:14:03.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.244 "dma_device_type": 2 00:14:03.244 } 00:14:03.244 ], 00:14:03.244 "driver_specific": {} 00:14:03.244 } 00:14:03.244 ] 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.244 01:58:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.244 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.245 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
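The `verify_raid_bdev_properties` pass that follows compares the raid volume against each configured base bdev using the jq filter `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'`. Since jq's `join` renders `null` as an empty string, a plain 512-byte bdev with no metadata produces `'512   '` (three trailing separators), which is exactly what the `[[ 512 == \5\1\2\ \ \ ]]` pattern tests above match. A small Python sketch of that key construction and comparison (`format_key` and the sample dicts are hypothetical, shaped after the dumps in this log):

```python
def format_key(bdev):
    """Mimic jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.

    Null/missing fields become empty strings, so an unformatted 512-byte
    bdev yields '512   ' -- "512" followed by three single-space separators.
    """
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

# Trimmed stand-ins for the Existed_Raid volume and one malloc base bdev.
raid = {"block_size": 512, "md_size": None, "md_interleave": None, "dif_type": None}
base = {"block_size": 512}

key = format_key(raid)
print(repr(key))                       # '512   '
print(key == format_key(base))         # True
```

Comparing the joined string rather than individual fields checks block size and metadata layout in one shot, which is why a mismatch in any of the four fields fails the test.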
00:14:03.504 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.504 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.504 "name": "Existed_Raid", 00:14:03.504 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:03.504 "strip_size_kb": 64, 00:14:03.504 "state": "online", 00:14:03.504 "raid_level": "raid5f", 00:14:03.504 "superblock": true, 00:14:03.504 "num_base_bdevs": 4, 00:14:03.504 "num_base_bdevs_discovered": 4, 00:14:03.504 "num_base_bdevs_operational": 4, 00:14:03.504 "base_bdevs_list": [ 00:14:03.504 { 00:14:03.504 "name": "BaseBdev1", 00:14:03.504 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:03.504 "is_configured": true, 00:14:03.504 "data_offset": 2048, 00:14:03.504 "data_size": 63488 00:14:03.504 }, 00:14:03.504 { 00:14:03.504 "name": "BaseBdev2", 00:14:03.504 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:03.504 "is_configured": true, 00:14:03.504 "data_offset": 2048, 00:14:03.504 "data_size": 63488 00:14:03.504 }, 00:14:03.504 { 00:14:03.504 "name": "BaseBdev3", 00:14:03.504 "uuid": "79bfc55d-c62f-4395-936c-c06b8b4dbb36", 00:14:03.504 "is_configured": true, 00:14:03.504 "data_offset": 2048, 00:14:03.504 "data_size": 63488 00:14:03.504 }, 00:14:03.504 { 00:14:03.504 "name": "BaseBdev4", 00:14:03.504 "uuid": "19d8e27d-426e-4053-833e-bca76de8ad43", 00:14:03.504 "is_configured": true, 00:14:03.504 "data_offset": 2048, 00:14:03.504 "data_size": 63488 00:14:03.504 } 00:14:03.504 ] 00:14:03.504 }' 00:14:03.504 01:58:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.504 01:58:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.763 [2024-12-07 01:58:09.161404] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:03.763 "name": "Existed_Raid", 00:14:03.763 "aliases": [ 00:14:03.763 "bb3a95c3-2979-446d-91d3-36875795549c" 00:14:03.763 ], 00:14:03.763 "product_name": "Raid Volume", 00:14:03.763 "block_size": 512, 00:14:03.763 "num_blocks": 190464, 00:14:03.763 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:03.763 "assigned_rate_limits": { 00:14:03.763 "rw_ios_per_sec": 0, 00:14:03.763 "rw_mbytes_per_sec": 0, 00:14:03.763 "r_mbytes_per_sec": 0, 00:14:03.763 "w_mbytes_per_sec": 0 00:14:03.763 }, 00:14:03.763 "claimed": false, 00:14:03.763 "zoned": false, 00:14:03.763 "supported_io_types": { 00:14:03.763 "read": true, 00:14:03.763 "write": true, 00:14:03.763 "unmap": false, 00:14:03.763 "flush": false, 
00:14:03.763 "reset": true, 00:14:03.763 "nvme_admin": false, 00:14:03.763 "nvme_io": false, 00:14:03.763 "nvme_io_md": false, 00:14:03.763 "write_zeroes": true, 00:14:03.763 "zcopy": false, 00:14:03.763 "get_zone_info": false, 00:14:03.763 "zone_management": false, 00:14:03.763 "zone_append": false, 00:14:03.763 "compare": false, 00:14:03.763 "compare_and_write": false, 00:14:03.763 "abort": false, 00:14:03.763 "seek_hole": false, 00:14:03.763 "seek_data": false, 00:14:03.763 "copy": false, 00:14:03.763 "nvme_iov_md": false 00:14:03.763 }, 00:14:03.763 "driver_specific": { 00:14:03.763 "raid": { 00:14:03.763 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:03.763 "strip_size_kb": 64, 00:14:03.763 "state": "online", 00:14:03.763 "raid_level": "raid5f", 00:14:03.763 "superblock": true, 00:14:03.763 "num_base_bdevs": 4, 00:14:03.763 "num_base_bdevs_discovered": 4, 00:14:03.763 "num_base_bdevs_operational": 4, 00:14:03.763 "base_bdevs_list": [ 00:14:03.763 { 00:14:03.763 "name": "BaseBdev1", 00:14:03.763 "uuid": "f5cce086-4f58-4668-8a97-662ff6c0c922", 00:14:03.763 "is_configured": true, 00:14:03.763 "data_offset": 2048, 00:14:03.763 "data_size": 63488 00:14:03.763 }, 00:14:03.763 { 00:14:03.763 "name": "BaseBdev2", 00:14:03.763 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:03.763 "is_configured": true, 00:14:03.763 "data_offset": 2048, 00:14:03.763 "data_size": 63488 00:14:03.763 }, 00:14:03.763 { 00:14:03.763 "name": "BaseBdev3", 00:14:03.763 "uuid": "79bfc55d-c62f-4395-936c-c06b8b4dbb36", 00:14:03.763 "is_configured": true, 00:14:03.763 "data_offset": 2048, 00:14:03.763 "data_size": 63488 00:14:03.763 }, 00:14:03.763 { 00:14:03.763 "name": "BaseBdev4", 00:14:03.763 "uuid": "19d8e27d-426e-4053-833e-bca76de8ad43", 00:14:03.763 "is_configured": true, 00:14:03.763 "data_offset": 2048, 00:14:03.763 "data_size": 63488 00:14:03.763 } 00:14:03.763 ] 00:14:03.763 } 00:14:03.763 } 00:14:03.763 }' 00:14:03.763 01:58:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:04.023 BaseBdev2 00:14:04.023 BaseBdev3 00:14:04.023 BaseBdev4' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.023 01:58:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:04.023 01:58:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 [2024-12-07 01:58:09.440784] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.023 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.282 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.282 "name": "Existed_Raid", 00:14:04.282 "uuid": "bb3a95c3-2979-446d-91d3-36875795549c", 00:14:04.282 "strip_size_kb": 64, 00:14:04.282 "state": "online", 00:14:04.282 "raid_level": "raid5f", 00:14:04.282 "superblock": true, 00:14:04.282 "num_base_bdevs": 4, 00:14:04.282 "num_base_bdevs_discovered": 3, 00:14:04.282 "num_base_bdevs_operational": 3, 00:14:04.282 "base_bdevs_list": [ 00:14:04.282 { 00:14:04.282 "name": 
null, 00:14:04.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.282 "is_configured": false, 00:14:04.282 "data_offset": 0, 00:14:04.282 "data_size": 63488 00:14:04.282 }, 00:14:04.282 { 00:14:04.282 "name": "BaseBdev2", 00:14:04.282 "uuid": "df7f7438-d1ac-4162-b626-1f86cd1abe8f", 00:14:04.282 "is_configured": true, 00:14:04.282 "data_offset": 2048, 00:14:04.282 "data_size": 63488 00:14:04.282 }, 00:14:04.282 { 00:14:04.282 "name": "BaseBdev3", 00:14:04.282 "uuid": "79bfc55d-c62f-4395-936c-c06b8b4dbb36", 00:14:04.282 "is_configured": true, 00:14:04.282 "data_offset": 2048, 00:14:04.282 "data_size": 63488 00:14:04.282 }, 00:14:04.282 { 00:14:04.282 "name": "BaseBdev4", 00:14:04.282 "uuid": "19d8e27d-426e-4053-833e-bca76de8ad43", 00:14:04.282 "is_configured": true, 00:14:04.282 "data_offset": 2048, 00:14:04.282 "data_size": 63488 00:14:04.282 } 00:14:04.282 ] 00:14:04.282 }' 00:14:04.282 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.282 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 [2024-12-07 01:58:09.879391] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.542 [2024-12-07 01:58:09.879582] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:04.542 [2024-12-07 01:58:09.890357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 [2024-12-07 01:58:09.942304] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.542 01:58:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 [2024-12-07 
01:58:10.013237] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:04.802 [2024-12-07 01:58:10.013280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:04.802 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 BaseBdev2 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 [ 00:14:04.803 { 00:14:04.803 "name": "BaseBdev2", 00:14:04.803 "aliases": [ 00:14:04.803 "66aba176-9c53-41c8-9fd0-229b901ae528" 00:14:04.803 ], 00:14:04.803 "product_name": "Malloc disk", 00:14:04.803 "block_size": 512, 00:14:04.803 
"num_blocks": 65536, 00:14:04.803 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:04.803 "assigned_rate_limits": { 00:14:04.803 "rw_ios_per_sec": 0, 00:14:04.803 "rw_mbytes_per_sec": 0, 00:14:04.803 "r_mbytes_per_sec": 0, 00:14:04.803 "w_mbytes_per_sec": 0 00:14:04.803 }, 00:14:04.803 "claimed": false, 00:14:04.803 "zoned": false, 00:14:04.803 "supported_io_types": { 00:14:04.803 "read": true, 00:14:04.803 "write": true, 00:14:04.803 "unmap": true, 00:14:04.803 "flush": true, 00:14:04.803 "reset": true, 00:14:04.803 "nvme_admin": false, 00:14:04.803 "nvme_io": false, 00:14:04.803 "nvme_io_md": false, 00:14:04.803 "write_zeroes": true, 00:14:04.803 "zcopy": true, 00:14:04.803 "get_zone_info": false, 00:14:04.803 "zone_management": false, 00:14:04.803 "zone_append": false, 00:14:04.803 "compare": false, 00:14:04.803 "compare_and_write": false, 00:14:04.803 "abort": true, 00:14:04.803 "seek_hole": false, 00:14:04.803 "seek_data": false, 00:14:04.803 "copy": true, 00:14:04.803 "nvme_iov_md": false 00:14:04.803 }, 00:14:04.803 "memory_domains": [ 00:14:04.803 { 00:14:04.803 "dma_device_id": "system", 00:14:04.803 "dma_device_type": 1 00:14:04.803 }, 00:14:04.803 { 00:14:04.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.803 "dma_device_type": 2 00:14:04.803 } 00:14:04.803 ], 00:14:04.803 "driver_specific": {} 00:14:04.803 } 00:14:04.803 ] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:04.803 01:58:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 BaseBdev3 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 [ 00:14:04.803 { 00:14:04.803 "name": "BaseBdev3", 00:14:04.803 "aliases": [ 00:14:04.803 
"d62c4211-9a7b-44a2-95b4-e39186216d7d" 00:14:04.803 ], 00:14:04.803 "product_name": "Malloc disk", 00:14:04.803 "block_size": 512, 00:14:04.803 "num_blocks": 65536, 00:14:04.803 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:04.803 "assigned_rate_limits": { 00:14:04.803 "rw_ios_per_sec": 0, 00:14:04.803 "rw_mbytes_per_sec": 0, 00:14:04.803 "r_mbytes_per_sec": 0, 00:14:04.803 "w_mbytes_per_sec": 0 00:14:04.803 }, 00:14:04.803 "claimed": false, 00:14:04.803 "zoned": false, 00:14:04.803 "supported_io_types": { 00:14:04.803 "read": true, 00:14:04.803 "write": true, 00:14:04.803 "unmap": true, 00:14:04.803 "flush": true, 00:14:04.803 "reset": true, 00:14:04.803 "nvme_admin": false, 00:14:04.803 "nvme_io": false, 00:14:04.803 "nvme_io_md": false, 00:14:04.803 "write_zeroes": true, 00:14:04.803 "zcopy": true, 00:14:04.803 "get_zone_info": false, 00:14:04.803 "zone_management": false, 00:14:04.803 "zone_append": false, 00:14:04.803 "compare": false, 00:14:04.803 "compare_and_write": false, 00:14:04.803 "abort": true, 00:14:04.803 "seek_hole": false, 00:14:04.803 "seek_data": false, 00:14:04.803 "copy": true, 00:14:04.803 "nvme_iov_md": false 00:14:04.803 }, 00:14:04.803 "memory_domains": [ 00:14:04.803 { 00:14:04.803 "dma_device_id": "system", 00:14:04.803 "dma_device_type": 1 00:14:04.803 }, 00:14:04.803 { 00:14:04.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.803 "dma_device_type": 2 00:14:04.803 } 00:14:04.803 ], 00:14:04.803 "driver_specific": {} 00:14:04.803 } 00:14:04.803 ] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.803 01:58:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 BaseBdev4 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.803 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:14:04.803 [ 00:14:04.803 { 00:14:04.803 "name": "BaseBdev4", 00:14:04.803 "aliases": [ 00:14:04.803 "f990153f-f3c7-4205-a9f7-f04da68eaa3c" 00:14:04.803 ], 00:14:04.803 "product_name": "Malloc disk", 00:14:04.803 "block_size": 512, 00:14:04.803 "num_blocks": 65536, 00:14:04.803 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:04.803 "assigned_rate_limits": { 00:14:04.803 "rw_ios_per_sec": 0, 00:14:04.803 "rw_mbytes_per_sec": 0, 00:14:04.803 "r_mbytes_per_sec": 0, 00:14:04.803 "w_mbytes_per_sec": 0 00:14:04.803 }, 00:14:04.803 "claimed": false, 00:14:04.803 "zoned": false, 00:14:04.803 "supported_io_types": { 00:14:04.803 "read": true, 00:14:04.803 "write": true, 00:14:04.803 "unmap": true, 00:14:04.803 "flush": true, 00:14:04.803 "reset": true, 00:14:04.803 "nvme_admin": false, 00:14:04.803 "nvme_io": false, 00:14:04.803 "nvme_io_md": false, 00:14:04.803 "write_zeroes": true, 00:14:04.803 "zcopy": true, 00:14:04.803 "get_zone_info": false, 00:14:04.804 "zone_management": false, 00:14:04.804 "zone_append": false, 00:14:04.804 "compare": false, 00:14:04.804 "compare_and_write": false, 00:14:04.804 "abort": true, 00:14:04.804 "seek_hole": false, 00:14:04.804 "seek_data": false, 00:14:04.804 "copy": true, 00:14:04.804 "nvme_iov_md": false 00:14:04.804 }, 00:14:04.804 "memory_domains": [ 00:14:04.804 { 00:14:04.804 "dma_device_id": "system", 00:14:04.804 "dma_device_type": 1 00:14:04.804 }, 00:14:04.804 { 00:14:04.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.804 "dma_device_type": 2 00:14:04.804 } 00:14:04.804 ], 00:14:04.804 "driver_specific": {} 00:14:04.804 } 00:14:04.804 ] 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:04.804 01:58:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.804 [2024-12-07 01:58:10.240538] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:04.804 [2024-12-07 01:58:10.240618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:04.804 [2024-12-07 01:58:10.240659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.804 [2024-12-07 01:58:10.242429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.804 [2024-12-07 01:58:10.242513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.804 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.063 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.063 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.063 "name": "Existed_Raid", 00:14:05.063 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:05.063 "strip_size_kb": 64, 00:14:05.063 "state": "configuring", 00:14:05.063 "raid_level": "raid5f", 00:14:05.063 "superblock": true, 00:14:05.063 "num_base_bdevs": 4, 00:14:05.063 "num_base_bdevs_discovered": 3, 00:14:05.063 "num_base_bdevs_operational": 4, 00:14:05.063 "base_bdevs_list": [ 00:14:05.063 { 00:14:05.063 "name": "BaseBdev1", 00:14:05.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.063 "is_configured": false, 00:14:05.063 "data_offset": 0, 00:14:05.063 "data_size": 0 00:14:05.063 }, 00:14:05.063 { 00:14:05.063 "name": "BaseBdev2", 00:14:05.063 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:05.063 "is_configured": true, 00:14:05.063 "data_offset": 2048, 00:14:05.063 
"data_size": 63488 00:14:05.063 }, 00:14:05.063 { 00:14:05.063 "name": "BaseBdev3", 00:14:05.063 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:05.063 "is_configured": true, 00:14:05.063 "data_offset": 2048, 00:14:05.063 "data_size": 63488 00:14:05.063 }, 00:14:05.063 { 00:14:05.063 "name": "BaseBdev4", 00:14:05.063 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:05.063 "is_configured": true, 00:14:05.063 "data_offset": 2048, 00:14:05.063 "data_size": 63488 00:14:05.063 } 00:14:05.063 ] 00:14:05.063 }' 00:14:05.063 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.063 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.323 [2024-12-07 01:58:10.699748] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.323 01:58:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.323 "name": "Existed_Raid", 00:14:05.323 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:05.323 "strip_size_kb": 64, 00:14:05.323 "state": "configuring", 00:14:05.323 "raid_level": "raid5f", 00:14:05.323 "superblock": true, 00:14:05.323 "num_base_bdevs": 4, 00:14:05.323 "num_base_bdevs_discovered": 2, 00:14:05.323 "num_base_bdevs_operational": 4, 00:14:05.323 "base_bdevs_list": [ 00:14:05.323 { 00:14:05.323 "name": "BaseBdev1", 00:14:05.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.323 "is_configured": false, 00:14:05.323 "data_offset": 0, 00:14:05.323 "data_size": 0 00:14:05.323 }, 00:14:05.323 { 00:14:05.323 "name": null, 00:14:05.323 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:05.323 
"is_configured": false, 00:14:05.323 "data_offset": 0, 00:14:05.323 "data_size": 63488 00:14:05.323 }, 00:14:05.323 { 00:14:05.323 "name": "BaseBdev3", 00:14:05.323 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:05.323 "is_configured": true, 00:14:05.323 "data_offset": 2048, 00:14:05.323 "data_size": 63488 00:14:05.323 }, 00:14:05.323 { 00:14:05.323 "name": "BaseBdev4", 00:14:05.323 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:05.323 "is_configured": true, 00:14:05.323 "data_offset": 2048, 00:14:05.323 "data_size": 63488 00:14:05.323 } 00:14:05.323 ] 00:14:05.323 }' 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.323 01:58:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.891 [2024-12-07 01:58:11.153867] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:14:05.891 BaseBdev1 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.891 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.891 [ 00:14:05.891 { 00:14:05.891 "name": "BaseBdev1", 00:14:05.891 "aliases": [ 00:14:05.891 "89dd5708-85d7-44a3-a371-3efdbb64a0a1" 00:14:05.891 ], 00:14:05.891 "product_name": "Malloc disk", 00:14:05.891 "block_size": 512, 00:14:05.891 "num_blocks": 65536, 00:14:05.891 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 
00:14:05.891 "assigned_rate_limits": { 00:14:05.891 "rw_ios_per_sec": 0, 00:14:05.891 "rw_mbytes_per_sec": 0, 00:14:05.891 "r_mbytes_per_sec": 0, 00:14:05.891 "w_mbytes_per_sec": 0 00:14:05.891 }, 00:14:05.891 "claimed": true, 00:14:05.891 "claim_type": "exclusive_write", 00:14:05.891 "zoned": false, 00:14:05.891 "supported_io_types": { 00:14:05.891 "read": true, 00:14:05.891 "write": true, 00:14:05.891 "unmap": true, 00:14:05.891 "flush": true, 00:14:05.891 "reset": true, 00:14:05.891 "nvme_admin": false, 00:14:05.891 "nvme_io": false, 00:14:05.891 "nvme_io_md": false, 00:14:05.891 "write_zeroes": true, 00:14:05.891 "zcopy": true, 00:14:05.891 "get_zone_info": false, 00:14:05.891 "zone_management": false, 00:14:05.891 "zone_append": false, 00:14:05.891 "compare": false, 00:14:05.891 "compare_and_write": false, 00:14:05.891 "abort": true, 00:14:05.892 "seek_hole": false, 00:14:05.892 "seek_data": false, 00:14:05.892 "copy": true, 00:14:05.892 "nvme_iov_md": false 00:14:05.892 }, 00:14:05.892 "memory_domains": [ 00:14:05.892 { 00:14:05.892 "dma_device_id": "system", 00:14:05.892 "dma_device_type": 1 00:14:05.892 }, 00:14:05.892 { 00:14:05.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.892 "dma_device_type": 2 00:14:05.892 } 00:14:05.892 ], 00:14:05.892 "driver_specific": {} 00:14:05.892 } 00:14:05.892 ] 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.892 01:58:11 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.892 "name": "Existed_Raid", 00:14:05.892 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:05.892 "strip_size_kb": 64, 00:14:05.892 "state": "configuring", 00:14:05.892 "raid_level": "raid5f", 00:14:05.892 "superblock": true, 00:14:05.892 "num_base_bdevs": 4, 00:14:05.892 "num_base_bdevs_discovered": 3, 00:14:05.892 "num_base_bdevs_operational": 4, 00:14:05.892 "base_bdevs_list": [ 00:14:05.892 { 00:14:05.892 "name": "BaseBdev1", 00:14:05.892 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 
00:14:05.892 "is_configured": true, 00:14:05.892 "data_offset": 2048, 00:14:05.892 "data_size": 63488 00:14:05.892 }, 00:14:05.892 { 00:14:05.892 "name": null, 00:14:05.892 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:05.892 "is_configured": false, 00:14:05.892 "data_offset": 0, 00:14:05.892 "data_size": 63488 00:14:05.892 }, 00:14:05.892 { 00:14:05.892 "name": "BaseBdev3", 00:14:05.892 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:05.892 "is_configured": true, 00:14:05.892 "data_offset": 2048, 00:14:05.892 "data_size": 63488 00:14:05.892 }, 00:14:05.892 { 00:14:05.892 "name": "BaseBdev4", 00:14:05.892 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:05.892 "is_configured": true, 00:14:05.892 "data_offset": 2048, 00:14:05.892 "data_size": 63488 00:14:05.892 } 00:14:05.892 ] 00:14:05.892 }' 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.892 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.460 [2024-12-07 01:58:11.677023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.460 "name": "Existed_Raid", 00:14:06.460 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:06.460 "strip_size_kb": 64, 00:14:06.460 "state": "configuring", 00:14:06.460 "raid_level": "raid5f", 00:14:06.460 "superblock": true, 00:14:06.460 "num_base_bdevs": 4, 00:14:06.460 "num_base_bdevs_discovered": 2, 00:14:06.460 "num_base_bdevs_operational": 4, 00:14:06.460 "base_bdevs_list": [ 00:14:06.460 { 00:14:06.460 "name": "BaseBdev1", 00:14:06.460 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:06.460 "is_configured": true, 00:14:06.460 "data_offset": 2048, 00:14:06.460 "data_size": 63488 00:14:06.460 }, 00:14:06.460 { 00:14:06.460 "name": null, 00:14:06.460 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:06.460 "is_configured": false, 00:14:06.460 "data_offset": 0, 00:14:06.460 "data_size": 63488 00:14:06.460 }, 00:14:06.460 { 00:14:06.460 "name": null, 00:14:06.460 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:06.460 "is_configured": false, 00:14:06.460 "data_offset": 0, 00:14:06.460 "data_size": 63488 00:14:06.460 }, 00:14:06.460 { 00:14:06.460 "name": "BaseBdev4", 00:14:06.460 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:06.460 "is_configured": true, 00:14:06.460 "data_offset": 2048, 00:14:06.460 "data_size": 63488 00:14:06.460 } 00:14:06.460 ] 00:14:06.460 }' 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.460 01:58:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 [2024-12-07 01:58:12.140288] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.719 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.978 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.978 "name": "Existed_Raid", 00:14:06.978 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:06.978 "strip_size_kb": 64, 00:14:06.978 "state": "configuring", 00:14:06.978 "raid_level": "raid5f", 00:14:06.978 "superblock": true, 00:14:06.978 "num_base_bdevs": 4, 00:14:06.978 "num_base_bdevs_discovered": 3, 00:14:06.978 "num_base_bdevs_operational": 4, 00:14:06.978 "base_bdevs_list": [ 00:14:06.978 { 00:14:06.978 "name": "BaseBdev1", 00:14:06.978 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:06.978 "is_configured": true, 00:14:06.978 "data_offset": 2048, 00:14:06.978 "data_size": 63488 00:14:06.978 }, 00:14:06.978 { 00:14:06.978 "name": null, 00:14:06.978 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:06.978 "is_configured": false, 00:14:06.978 "data_offset": 0, 00:14:06.978 "data_size": 63488 00:14:06.978 }, 00:14:06.978 { 00:14:06.978 "name": "BaseBdev3", 00:14:06.978 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 
00:14:06.978 "is_configured": true, 00:14:06.978 "data_offset": 2048, 00:14:06.978 "data_size": 63488 00:14:06.979 }, 00:14:06.979 { 00:14:06.979 "name": "BaseBdev4", 00:14:06.979 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:06.979 "is_configured": true, 00:14:06.979 "data_offset": 2048, 00:14:06.979 "data_size": 63488 00:14:06.979 } 00:14:06.979 ] 00:14:06.979 }' 00:14:06.979 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.979 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.238 [2024-12-07 01:58:12.631534] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.238 "name": "Existed_Raid", 00:14:07.238 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:07.238 "strip_size_kb": 64, 00:14:07.238 "state": "configuring", 00:14:07.238 "raid_level": "raid5f", 
00:14:07.238 "superblock": true, 00:14:07.238 "num_base_bdevs": 4, 00:14:07.238 "num_base_bdevs_discovered": 2, 00:14:07.238 "num_base_bdevs_operational": 4, 00:14:07.238 "base_bdevs_list": [ 00:14:07.238 { 00:14:07.238 "name": null, 00:14:07.238 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:07.238 "is_configured": false, 00:14:07.238 "data_offset": 0, 00:14:07.238 "data_size": 63488 00:14:07.238 }, 00:14:07.238 { 00:14:07.238 "name": null, 00:14:07.238 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:07.238 "is_configured": false, 00:14:07.238 "data_offset": 0, 00:14:07.238 "data_size": 63488 00:14:07.238 }, 00:14:07.238 { 00:14:07.238 "name": "BaseBdev3", 00:14:07.238 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:07.238 "is_configured": true, 00:14:07.238 "data_offset": 2048, 00:14:07.238 "data_size": 63488 00:14:07.238 }, 00:14:07.238 { 00:14:07.238 "name": "BaseBdev4", 00:14:07.238 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:07.238 "is_configured": true, 00:14:07.238 "data_offset": 2048, 00:14:07.238 "data_size": 63488 00:14:07.238 } 00:14:07.238 ] 00:14:07.238 }' 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.238 01:58:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 [2024-12-07 01:58:13.133075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.805 "name": "Existed_Raid", 00:14:07.805 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:07.805 "strip_size_kb": 64, 00:14:07.805 "state": "configuring", 00:14:07.805 "raid_level": "raid5f", 00:14:07.805 "superblock": true, 00:14:07.805 "num_base_bdevs": 4, 00:14:07.805 "num_base_bdevs_discovered": 3, 00:14:07.805 "num_base_bdevs_operational": 4, 00:14:07.805 "base_bdevs_list": [ 00:14:07.805 { 00:14:07.805 "name": null, 00:14:07.805 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:07.805 "is_configured": false, 00:14:07.805 "data_offset": 0, 00:14:07.805 "data_size": 63488 00:14:07.805 }, 00:14:07.805 { 00:14:07.805 "name": "BaseBdev2", 00:14:07.805 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:07.805 "is_configured": true, 00:14:07.805 "data_offset": 2048, 00:14:07.805 "data_size": 63488 00:14:07.805 }, 00:14:07.805 { 00:14:07.805 "name": "BaseBdev3", 00:14:07.805 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:07.805 "is_configured": true, 00:14:07.805 "data_offset": 2048, 00:14:07.805 "data_size": 63488 00:14:07.805 }, 00:14:07.805 { 00:14:07.805 "name": "BaseBdev4", 00:14:07.805 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:07.805 "is_configured": true, 00:14:07.805 "data_offset": 2048, 00:14:07.805 "data_size": 63488 00:14:07.805 } 00:14:07.805 ] 00:14:07.805 }' 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:14:07.805 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.063 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.063 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.063 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.063 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 89dd5708-85d7-44a3-a371-3efdbb64a0a1 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.322 [2024-12-07 01:58:13.630907] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:08.322 NewBaseBdev 00:14:08.322 [2024-12-07 
01:58:13.631166] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:08.322 [2024-12-07 01:58:13.631183] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:08.322 [2024-12-07 01:58:13.631438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:14:08.322 [2024-12-07 01:58:13.631899] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:08.322 [2024-12-07 01:58:13.631914] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:14:08.322 [2024-12-07 01:58:13.632012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:08.322 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.323 [ 00:14:08.323 { 00:14:08.323 "name": "NewBaseBdev", 00:14:08.323 "aliases": [ 00:14:08.323 "89dd5708-85d7-44a3-a371-3efdbb64a0a1" 00:14:08.323 ], 00:14:08.323 "product_name": "Malloc disk", 00:14:08.323 "block_size": 512, 00:14:08.323 "num_blocks": 65536, 00:14:08.323 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:08.323 "assigned_rate_limits": { 00:14:08.323 "rw_ios_per_sec": 0, 00:14:08.323 "rw_mbytes_per_sec": 0, 00:14:08.323 "r_mbytes_per_sec": 0, 00:14:08.323 "w_mbytes_per_sec": 0 00:14:08.323 }, 00:14:08.323 "claimed": true, 00:14:08.323 "claim_type": "exclusive_write", 00:14:08.323 "zoned": false, 00:14:08.323 "supported_io_types": { 00:14:08.323 "read": true, 00:14:08.323 "write": true, 00:14:08.323 "unmap": true, 00:14:08.323 "flush": true, 00:14:08.323 "reset": true, 00:14:08.323 "nvme_admin": false, 00:14:08.323 "nvme_io": false, 00:14:08.323 "nvme_io_md": false, 00:14:08.323 "write_zeroes": true, 00:14:08.323 "zcopy": true, 00:14:08.323 "get_zone_info": false, 00:14:08.323 "zone_management": false, 00:14:08.323 "zone_append": false, 00:14:08.323 "compare": false, 00:14:08.323 "compare_and_write": false, 00:14:08.323 "abort": true, 00:14:08.323 "seek_hole": false, 00:14:08.323 "seek_data": false, 00:14:08.323 "copy": true, 00:14:08.323 "nvme_iov_md": false 00:14:08.323 }, 00:14:08.323 "memory_domains": [ 00:14:08.323 { 00:14:08.323 "dma_device_id": "system", 00:14:08.323 "dma_device_type": 1 00:14:08.323 }, 00:14:08.323 { 00:14:08.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.323 "dma_device_type": 2 00:14:08.323 } 
00:14:08.323 ], 00:14:08.323 "driver_specific": {} 00:14:08.323 } 00:14:08.323 ] 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.323 
01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.323 "name": "Existed_Raid", 00:14:08.323 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:08.323 "strip_size_kb": 64, 00:14:08.323 "state": "online", 00:14:08.323 "raid_level": "raid5f", 00:14:08.323 "superblock": true, 00:14:08.323 "num_base_bdevs": 4, 00:14:08.323 "num_base_bdevs_discovered": 4, 00:14:08.323 "num_base_bdevs_operational": 4, 00:14:08.323 "base_bdevs_list": [ 00:14:08.323 { 00:14:08.323 "name": "NewBaseBdev", 00:14:08.323 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:08.323 "is_configured": true, 00:14:08.323 "data_offset": 2048, 00:14:08.323 "data_size": 63488 00:14:08.323 }, 00:14:08.323 { 00:14:08.323 "name": "BaseBdev2", 00:14:08.323 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:08.323 "is_configured": true, 00:14:08.323 "data_offset": 2048, 00:14:08.323 "data_size": 63488 00:14:08.323 }, 00:14:08.323 { 00:14:08.323 "name": "BaseBdev3", 00:14:08.323 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:08.323 "is_configured": true, 00:14:08.323 "data_offset": 2048, 00:14:08.323 "data_size": 63488 00:14:08.323 }, 00:14:08.323 { 00:14:08.323 "name": "BaseBdev4", 00:14:08.323 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:08.323 "is_configured": true, 00:14:08.323 "data_offset": 2048, 00:14:08.323 "data_size": 63488 00:14:08.323 } 00:14:08.323 ] 00:14:08.323 }' 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.323 01:58:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.890 [2024-12-07 01:58:14.086324] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.890 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:08.890 "name": "Existed_Raid", 00:14:08.890 "aliases": [ 00:14:08.890 "16660573-382a-4608-afd5-265a9f4e9de4" 00:14:08.890 ], 00:14:08.890 "product_name": "Raid Volume", 00:14:08.890 "block_size": 512, 00:14:08.890 "num_blocks": 190464, 00:14:08.890 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:08.890 "assigned_rate_limits": { 00:14:08.890 "rw_ios_per_sec": 0, 00:14:08.890 "rw_mbytes_per_sec": 0, 00:14:08.890 "r_mbytes_per_sec": 0, 00:14:08.890 "w_mbytes_per_sec": 0 00:14:08.890 }, 00:14:08.890 "claimed": false, 00:14:08.890 "zoned": false, 00:14:08.890 "supported_io_types": { 00:14:08.890 "read": true, 00:14:08.890 "write": true, 00:14:08.890 "unmap": false, 00:14:08.890 "flush": false, 
00:14:08.890 "reset": true, 00:14:08.890 "nvme_admin": false, 00:14:08.890 "nvme_io": false, 00:14:08.890 "nvme_io_md": false, 00:14:08.890 "write_zeroes": true, 00:14:08.890 "zcopy": false, 00:14:08.890 "get_zone_info": false, 00:14:08.890 "zone_management": false, 00:14:08.890 "zone_append": false, 00:14:08.890 "compare": false, 00:14:08.890 "compare_and_write": false, 00:14:08.890 "abort": false, 00:14:08.890 "seek_hole": false, 00:14:08.890 "seek_data": false, 00:14:08.890 "copy": false, 00:14:08.890 "nvme_iov_md": false 00:14:08.890 }, 00:14:08.890 "driver_specific": { 00:14:08.891 "raid": { 00:14:08.891 "uuid": "16660573-382a-4608-afd5-265a9f4e9de4", 00:14:08.891 "strip_size_kb": 64, 00:14:08.891 "state": "online", 00:14:08.891 "raid_level": "raid5f", 00:14:08.891 "superblock": true, 00:14:08.891 "num_base_bdevs": 4, 00:14:08.891 "num_base_bdevs_discovered": 4, 00:14:08.891 "num_base_bdevs_operational": 4, 00:14:08.891 "base_bdevs_list": [ 00:14:08.891 { 00:14:08.891 "name": "NewBaseBdev", 00:14:08.891 "uuid": "89dd5708-85d7-44a3-a371-3efdbb64a0a1", 00:14:08.891 "is_configured": true, 00:14:08.891 "data_offset": 2048, 00:14:08.891 "data_size": 63488 00:14:08.891 }, 00:14:08.891 { 00:14:08.891 "name": "BaseBdev2", 00:14:08.891 "uuid": "66aba176-9c53-41c8-9fd0-229b901ae528", 00:14:08.891 "is_configured": true, 00:14:08.891 "data_offset": 2048, 00:14:08.891 "data_size": 63488 00:14:08.891 }, 00:14:08.891 { 00:14:08.891 "name": "BaseBdev3", 00:14:08.891 "uuid": "d62c4211-9a7b-44a2-95b4-e39186216d7d", 00:14:08.891 "is_configured": true, 00:14:08.891 "data_offset": 2048, 00:14:08.891 "data_size": 63488 00:14:08.891 }, 00:14:08.891 { 00:14:08.891 "name": "BaseBdev4", 00:14:08.891 "uuid": "f990153f-f3c7-4205-a9f7-f04da68eaa3c", 00:14:08.891 "is_configured": true, 00:14:08.891 "data_offset": 2048, 00:14:08.891 "data_size": 63488 00:14:08.891 } 00:14:08.891 ] 00:14:08.891 } 00:14:08.891 } 00:14:08.891 }' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:08.891 BaseBdev2 00:14:08.891 BaseBdev3 00:14:08.891 BaseBdev4' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:08.891 
01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.891 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:09.151 01:58:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.151 [2024-12-07 01:58:14.425576] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.151 [2024-12-07 01:58:14.425603] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:09.151 [2024-12-07 01:58:14.425676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:09.151 [2024-12-07 01:58:14.425935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:09.151 [2024-12-07 01:58:14.425947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93591 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 93591 ']' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@954 -- # kill -0 93591 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93591 00:14:09.151 killing process with pid 93591 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93591' 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 93591 00:14:09.151 [2024-12-07 01:58:14.475971] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:09.151 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 93591 00:14:09.151 [2024-12-07 01:58:14.515632] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.411 01:58:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:09.411 00:14:09.411 real 0m9.453s 00:14:09.411 user 0m16.098s 00:14:09.411 sys 0m2.020s 00:14:09.411 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.411 01:58:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.411 ************************************ 00:14:09.411 END TEST raid5f_state_function_test_sb 00:14:09.411 ************************************ 00:14:09.411 01:58:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:14:09.411 01:58:14 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:09.411 01:58:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.411 01:58:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:09.411 ************************************ 00:14:09.411 START TEST raid5f_superblock_test 00:14:09.411 ************************************ 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:09.411 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94240 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94240 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 94240 ']' 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.412 01:58:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.671 [2024-12-07 01:58:14.917045] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
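The trace above (and the raid_bdev1 dump later in this log) drives the jq filter at bdev_raid.sh@188, which pulls the configured base-bdev names out of `driver_specific.raid.base_bdevs_list`. A minimal standalone sketch of that filter, using illustrative placeholder JSON rather than output from a live SPDK target:

```shell
# Illustrative bdev_get_bdevs-style payload; names and values are
# placeholders, not real RPC output.
json='{"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},
  {"name":"pt2","is_configured":false},
  {"name":"pt3","is_configured":true}]}}}'

# Same filter as bdev_raid.sh@188: keep only configured base bdev names.
names=$(printf '%s' "$json" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
printf '%s\n' "$names"
```

As the @189–@193 lines in the trace show, the script then loops over `$base_bdev_names` and compares each base bdev's `block_size`/`md_size`/`md_interleave`/`dif_type` tuple (joined into a string such as `'512   '`) against the raid bdev's own tuple.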
00:14:09.671 [2024-12-07 01:58:14.917672] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94240 ] 00:14:09.671 [2024-12-07 01:58:15.061720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.671 [2024-12-07 01:58:15.105113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.929 [2024-12-07 01:58:15.146368] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.929 [2024-12-07 01:58:15.146486] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.495 malloc1 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:10.495 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 [2024-12-07 01:58:15.763869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:10.496 [2024-12-07 01:58:15.763970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.496 [2024-12-07 01:58:15.763990] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:10.496 [2024-12-07 01:58:15.764019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.496 [2024-12-07 01:58:15.766045] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.496 [2024-12-07 01:58:15.766084] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:10.496 pt1 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 malloc2 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 [2024-12-07 01:58:15.807992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:10.496 [2024-12-07 01:58:15.808175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.496 [2024-12-07 01:58:15.808249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:10.496 [2024-12-07 01:58:15.808329] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.496 [2024-12-07 01:58:15.812851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.496 [2024-12-07 01:58:15.812994] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:10.496 pt2 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 malloc3 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 [2024-12-07 01:58:15.842182] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:10.496 [2024-12-07 01:58:15.842272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.496 [2024-12-07 01:58:15.842304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:10.496 [2024-12-07 01:58:15.842333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.496 [2024-12-07 01:58:15.844365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.496 [2024-12-07 01:58:15.844437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:10.496 pt3 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 malloc4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 [2024-12-07 01:58:15.874417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:10.496 [2024-12-07 01:58:15.874465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.496 [2024-12-07 01:58:15.874481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:10.496 [2024-12-07 01:58:15.874494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.496 [2024-12-07 01:58:15.876523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.496 [2024-12-07 01:58:15.876559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:10.496 pt4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.496 [2024-12-07 01:58:15.886435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:10.496 [2024-12-07 01:58:15.888260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:10.496 [2024-12-07 01:58:15.888323] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:10.496 [2024-12-07 01:58:15.888364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:10.496 [2024-12-07 01:58:15.888520] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:10.496 [2024-12-07 01:58:15.888532] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:10.496 [2024-12-07 01:58:15.888773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:10.496 [2024-12-07 01:58:15.889220] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:10.496 [2024-12-07 01:58:15.889238] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:10.496 [2024-12-07 01:58:15.889362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.496 
01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.496 "name": "raid_bdev1", 00:14:10.496 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:10.496 "strip_size_kb": 64, 00:14:10.496 "state": "online", 00:14:10.496 "raid_level": "raid5f", 00:14:10.496 "superblock": true, 00:14:10.496 "num_base_bdevs": 4, 00:14:10.496 "num_base_bdevs_discovered": 4, 00:14:10.496 "num_base_bdevs_operational": 4, 00:14:10.496 "base_bdevs_list": [ 00:14:10.496 { 00:14:10.496 "name": "pt1", 00:14:10.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.496 "is_configured": true, 00:14:10.496 "data_offset": 2048, 00:14:10.496 "data_size": 63488 00:14:10.496 }, 00:14:10.496 { 00:14:10.496 "name": "pt2", 00:14:10.496 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.496 "is_configured": true, 00:14:10.496 "data_offset": 2048, 00:14:10.496 
"data_size": 63488 00:14:10.496 }, 00:14:10.496 { 00:14:10.496 "name": "pt3", 00:14:10.496 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:10.496 "is_configured": true, 00:14:10.496 "data_offset": 2048, 00:14:10.496 "data_size": 63488 00:14:10.496 }, 00:14:10.496 { 00:14:10.496 "name": "pt4", 00:14:10.496 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:10.496 "is_configured": true, 00:14:10.496 "data_offset": 2048, 00:14:10.496 "data_size": 63488 00:14:10.496 } 00:14:10.496 ] 00:14:10.496 }' 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.496 01:58:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.063 [2024-12-07 01:58:16.278623] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.063 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.063 "name": "raid_bdev1", 00:14:11.063 "aliases": [ 00:14:11.063 "808654a6-7b82-4efe-b382-ea96de8edfae" 00:14:11.063 ], 00:14:11.063 "product_name": "Raid Volume", 00:14:11.063 "block_size": 512, 00:14:11.063 "num_blocks": 190464, 00:14:11.063 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:11.063 "assigned_rate_limits": { 00:14:11.063 "rw_ios_per_sec": 0, 00:14:11.063 "rw_mbytes_per_sec": 0, 00:14:11.063 "r_mbytes_per_sec": 0, 00:14:11.063 "w_mbytes_per_sec": 0 00:14:11.063 }, 00:14:11.063 "claimed": false, 00:14:11.063 "zoned": false, 00:14:11.063 "supported_io_types": { 00:14:11.064 "read": true, 00:14:11.064 "write": true, 00:14:11.064 "unmap": false, 00:14:11.064 "flush": false, 00:14:11.064 "reset": true, 00:14:11.064 "nvme_admin": false, 00:14:11.064 "nvme_io": false, 00:14:11.064 "nvme_io_md": false, 00:14:11.064 "write_zeroes": true, 00:14:11.064 "zcopy": false, 00:14:11.064 "get_zone_info": false, 00:14:11.064 "zone_management": false, 00:14:11.064 "zone_append": false, 00:14:11.064 "compare": false, 00:14:11.064 "compare_and_write": false, 00:14:11.064 "abort": false, 00:14:11.064 "seek_hole": false, 00:14:11.064 "seek_data": false, 00:14:11.064 "copy": false, 00:14:11.064 "nvme_iov_md": false 00:14:11.064 }, 00:14:11.064 "driver_specific": { 00:14:11.064 "raid": { 00:14:11.064 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:11.064 "strip_size_kb": 64, 00:14:11.064 "state": "online", 00:14:11.064 "raid_level": "raid5f", 00:14:11.064 "superblock": true, 00:14:11.064 "num_base_bdevs": 4, 00:14:11.064 "num_base_bdevs_discovered": 4, 00:14:11.064 "num_base_bdevs_operational": 4, 00:14:11.064 "base_bdevs_list": [ 00:14:11.064 { 00:14:11.064 "name": "pt1", 00:14:11.064 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:11.064 "is_configured": true, 00:14:11.064 "data_offset": 2048, 
00:14:11.064 "data_size": 63488 00:14:11.064 }, 00:14:11.064 { 00:14:11.064 "name": "pt2", 00:14:11.064 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.064 "is_configured": true, 00:14:11.064 "data_offset": 2048, 00:14:11.064 "data_size": 63488 00:14:11.064 }, 00:14:11.064 { 00:14:11.064 "name": "pt3", 00:14:11.064 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.064 "is_configured": true, 00:14:11.064 "data_offset": 2048, 00:14:11.064 "data_size": 63488 00:14:11.064 }, 00:14:11.064 { 00:14:11.064 "name": "pt4", 00:14:11.064 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:11.064 "is_configured": true, 00:14:11.064 "data_offset": 2048, 00:14:11.064 "data_size": 63488 00:14:11.064 } 00:14:11.064 ] 00:14:11.064 } 00:14:11.064 } 00:14:11.064 }' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:11.064 pt2 00:14:11.064 pt3 00:14:11.064 pt4' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.064 01:58:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.064 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 [2024-12-07 01:58:16.566058] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=808654a6-7b82-4efe-b382-ea96de8edfae 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
808654a6-7b82-4efe-b382-ea96de8edfae ']' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 [2024-12-07 01:58:16.609843] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.324 [2024-12-07 01:58:16.609910] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.324 [2024-12-07 01:58:16.610017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.324 [2024-12-07 01:58:16.610138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.324 [2024-12-07 01:58:16.610198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.324 
01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:14:11.324 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.324 [2024-12-07 01:58:16.773592] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:11.324 [2024-12-07 01:58:16.775375] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:11.324 [2024-12-07 01:58:16.775419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:11.324 [2024-12-07 01:58:16.775455] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:11.324 [2024-12-07 01:58:16.775502] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:11.324 [2024-12-07 01:58:16.775548] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:11.324 [2024-12-07 01:58:16.775566] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:11.324 [2024-12-07 01:58:16.775582] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:11.324 [2024-12-07 01:58:16.775595] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.325 [2024-12-07 01:58:16.775606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:11.325 request: 00:14:11.325 { 00:14:11.325 "name": "raid_bdev1", 00:14:11.325 "raid_level": "raid5f", 00:14:11.325 "base_bdevs": [ 00:14:11.325 "malloc1", 00:14:11.325 "malloc2", 00:14:11.325 "malloc3", 00:14:11.325 "malloc4" 00:14:11.325 ], 00:14:11.325 "strip_size_kb": 64, 00:14:11.325 "superblock": false, 00:14:11.325 "method": "bdev_raid_create", 00:14:11.325 "req_id": 1 00:14:11.325 } 00:14:11.325 Got JSON-RPC error response 
00:14:11.325 response: 00:14:11.325 { 00:14:11.325 "code": -17, 00:14:11.325 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:11.325 } 00:14:11.325 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:11.325 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:11.325 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.325 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.325 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.584 [2024-12-07 01:58:16.825450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:11.584 [2024-12-07 01:58:16.825532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:11.584 [2024-12-07 01:58:16.825574] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:11.584 [2024-12-07 01:58:16.825605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.584 [2024-12-07 01:58:16.827756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.584 [2024-12-07 01:58:16.827831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:11.584 [2024-12-07 01:58:16.827918] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:11.584 [2024-12-07 01:58:16.827983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:11.584 pt1 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.584 "name": "raid_bdev1", 00:14:11.584 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:11.584 "strip_size_kb": 64, 00:14:11.584 "state": "configuring", 00:14:11.584 "raid_level": "raid5f", 00:14:11.584 "superblock": true, 00:14:11.584 "num_base_bdevs": 4, 00:14:11.584 "num_base_bdevs_discovered": 1, 00:14:11.584 "num_base_bdevs_operational": 4, 00:14:11.584 "base_bdevs_list": [ 00:14:11.584 { 00:14:11.584 "name": "pt1", 00:14:11.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:11.584 "is_configured": true, 00:14:11.584 "data_offset": 2048, 00:14:11.584 "data_size": 63488 00:14:11.584 }, 00:14:11.584 { 00:14:11.584 "name": null, 00:14:11.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:11.584 "is_configured": false, 00:14:11.584 "data_offset": 2048, 00:14:11.584 "data_size": 63488 00:14:11.584 }, 00:14:11.584 { 00:14:11.584 "name": null, 00:14:11.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:11.584 "is_configured": false, 00:14:11.584 "data_offset": 2048, 00:14:11.584 "data_size": 63488 00:14:11.584 }, 00:14:11.584 { 00:14:11.584 "name": null, 00:14:11.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:11.584 "is_configured": false, 00:14:11.584 "data_offset": 2048, 00:14:11.584 "data_size": 63488 00:14:11.584 } 00:14:11.584 ] 00:14:11.584 }' 
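The `jq` filters traced above (`bdev_raid.sh@113` and `@188`) pull fields out of the `bdev_get_bdevs`/`bdev_raid_get_bdevs` JSON. A minimal standalone sketch of the `@188` extraction, with a trimmed copy of the `raid_bdev1` dump from this log standing in for the live RPC output (the hard-coded JSON is illustrative, not a full bdev record):

```shell
# Hypothetical offline reproduction of the bdev_raid.sh@188 filter.
# The JSON below is a trimmed stand-in for the raid_bdev1 dump seen
# in this log; only the fields the filter touches are kept.
raid_bdev_info='{
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        { "name": "pt1", "is_configured": true },
        { "name": "pt2", "is_configured": true },
        { "name": "pt3", "is_configured": false }
      ]
    }
  }
}'
# Same filter as @188: keep only the names of configured base bdevs.
base_bdev_names=$(printf '%s' "$raid_bdev_info" \
  | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
printf '%s\n' "$base_bdev_names"
```

In the trace above the same filter yields `pt1 pt2 pt3 pt4`, since all four base bdevs report `"is_configured": true`.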
00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.584 01:58:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.151 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:12.151 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:12.151 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.151 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.151 [2024-12-07 01:58:17.312629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:12.151 [2024-12-07 01:58:17.312751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.152 [2024-12-07 01:58:17.312790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:12.152 [2024-12-07 01:58:17.312818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.152 [2024-12-07 01:58:17.313198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.152 [2024-12-07 01:58:17.313250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:12.152 [2024-12-07 01:58:17.313319] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:12.152 [2024-12-07 01:58:17.313347] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:12.152 pt2 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.152 [2024-12-07 01:58:17.324644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.152 "name": "raid_bdev1", 00:14:12.152 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:12.152 "strip_size_kb": 64, 00:14:12.152 "state": "configuring", 00:14:12.152 "raid_level": "raid5f", 00:14:12.152 "superblock": true, 00:14:12.152 "num_base_bdevs": 4, 00:14:12.152 "num_base_bdevs_discovered": 1, 00:14:12.152 "num_base_bdevs_operational": 4, 00:14:12.152 "base_bdevs_list": [ 00:14:12.152 { 00:14:12.152 "name": "pt1", 00:14:12.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.152 "is_configured": true, 00:14:12.152 "data_offset": 2048, 00:14:12.152 "data_size": 63488 00:14:12.152 }, 00:14:12.152 { 00:14:12.152 "name": null, 00:14:12.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.152 "is_configured": false, 00:14:12.152 "data_offset": 0, 00:14:12.152 "data_size": 63488 00:14:12.152 }, 00:14:12.152 { 00:14:12.152 "name": null, 00:14:12.152 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.152 "is_configured": false, 00:14:12.152 "data_offset": 2048, 00:14:12.152 "data_size": 63488 00:14:12.152 }, 00:14:12.152 { 00:14:12.152 "name": null, 00:14:12.152 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:12.152 "is_configured": false, 00:14:12.152 "data_offset": 2048, 00:14:12.152 "data_size": 63488 00:14:12.152 } 00:14:12.152 ] 00:14:12.152 }' 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.152 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.411 [2024-12-07 01:58:17.727927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:12.411 [2024-12-07 01:58:17.728025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.411 [2024-12-07 01:58:17.728058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:12.411 [2024-12-07 01:58:17.728086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.411 [2024-12-07 01:58:17.728480] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.411 [2024-12-07 01:58:17.728545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:12.411 [2024-12-07 01:58:17.728640] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:12.411 [2024-12-07 01:58:17.728700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:12.411 pt2 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.411 [2024-12-07 01:58:17.739901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:14:12.411 [2024-12-07 01:58:17.739981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.411 [2024-12-07 01:58:17.740013] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:12.411 [2024-12-07 01:58:17.740050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.411 [2024-12-07 01:58:17.740377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.411 [2024-12-07 01:58:17.740429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:12.411 [2024-12-07 01:58:17.740507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:12.411 [2024-12-07 01:58:17.740552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:12.411 pt3 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.411 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.411 [2024-12-07 01:58:17.751908] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:12.411 [2024-12-07 01:58:17.752000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.411 [2024-12-07 01:58:17.752028] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:12.411 [2024-12-07 01:58:17.752055] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.411 [2024-12-07 01:58:17.752340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.411 [2024-12-07 01:58:17.752393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:12.411 [2024-12-07 01:58:17.752443] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:12.411 [2024-12-07 01:58:17.752461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:12.411 [2024-12-07 01:58:17.752570] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:12.411 [2024-12-07 01:58:17.752585] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:12.411 [2024-12-07 01:58:17.752814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:12.411 [2024-12-07 01:58:17.753230] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:12.411 [2024-12-07 01:58:17.753240] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:12.412 [2024-12-07 01:58:17.753331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.412 pt4 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.412 "name": "raid_bdev1", 00:14:12.412 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:12.412 "strip_size_kb": 64, 00:14:12.412 "state": "online", 00:14:12.412 "raid_level": "raid5f", 00:14:12.412 "superblock": true, 00:14:12.412 "num_base_bdevs": 4, 00:14:12.412 "num_base_bdevs_discovered": 4, 00:14:12.412 "num_base_bdevs_operational": 4, 00:14:12.412 "base_bdevs_list": [ 00:14:12.412 { 00:14:12.412 "name": "pt1", 00:14:12.412 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.412 "is_configured": true, 00:14:12.412 
"data_offset": 2048, 00:14:12.412 "data_size": 63488 00:14:12.412 }, 00:14:12.412 { 00:14:12.412 "name": "pt2", 00:14:12.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.412 "is_configured": true, 00:14:12.412 "data_offset": 2048, 00:14:12.412 "data_size": 63488 00:14:12.412 }, 00:14:12.412 { 00:14:12.412 "name": "pt3", 00:14:12.412 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.412 "is_configured": true, 00:14:12.412 "data_offset": 2048, 00:14:12.412 "data_size": 63488 00:14:12.412 }, 00:14:12.412 { 00:14:12.412 "name": "pt4", 00:14:12.412 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:12.412 "is_configured": true, 00:14:12.412 "data_offset": 2048, 00:14:12.412 "data_size": 63488 00:14:12.412 } 00:14:12.412 ] 00:14:12.412 }' 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.412 01:58:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.979 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.980 01:58:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:12.980 [2024-12-07 01:58:18.187382] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:12.980 "name": "raid_bdev1", 00:14:12.980 "aliases": [ 00:14:12.980 "808654a6-7b82-4efe-b382-ea96de8edfae" 00:14:12.980 ], 00:14:12.980 "product_name": "Raid Volume", 00:14:12.980 "block_size": 512, 00:14:12.980 "num_blocks": 190464, 00:14:12.980 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:12.980 "assigned_rate_limits": { 00:14:12.980 "rw_ios_per_sec": 0, 00:14:12.980 "rw_mbytes_per_sec": 0, 00:14:12.980 "r_mbytes_per_sec": 0, 00:14:12.980 "w_mbytes_per_sec": 0 00:14:12.980 }, 00:14:12.980 "claimed": false, 00:14:12.980 "zoned": false, 00:14:12.980 "supported_io_types": { 00:14:12.980 "read": true, 00:14:12.980 "write": true, 00:14:12.980 "unmap": false, 00:14:12.980 "flush": false, 00:14:12.980 "reset": true, 00:14:12.980 "nvme_admin": false, 00:14:12.980 "nvme_io": false, 00:14:12.980 "nvme_io_md": false, 00:14:12.980 "write_zeroes": true, 00:14:12.980 "zcopy": false, 00:14:12.980 "get_zone_info": false, 00:14:12.980 "zone_management": false, 00:14:12.980 "zone_append": false, 00:14:12.980 "compare": false, 00:14:12.980 "compare_and_write": false, 00:14:12.980 "abort": false, 00:14:12.980 "seek_hole": false, 00:14:12.980 "seek_data": false, 00:14:12.980 "copy": false, 00:14:12.980 "nvme_iov_md": false 00:14:12.980 }, 00:14:12.980 "driver_specific": { 00:14:12.980 "raid": { 00:14:12.980 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:12.980 "strip_size_kb": 64, 00:14:12.980 "state": "online", 00:14:12.980 "raid_level": "raid5f", 00:14:12.980 "superblock": true, 00:14:12.980 "num_base_bdevs": 4, 00:14:12.980 "num_base_bdevs_discovered": 4, 
00:14:12.980 "num_base_bdevs_operational": 4, 00:14:12.980 "base_bdevs_list": [ 00:14:12.980 { 00:14:12.980 "name": "pt1", 00:14:12.980 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:12.980 "is_configured": true, 00:14:12.980 "data_offset": 2048, 00:14:12.980 "data_size": 63488 00:14:12.980 }, 00:14:12.980 { 00:14:12.980 "name": "pt2", 00:14:12.980 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:12.980 "is_configured": true, 00:14:12.980 "data_offset": 2048, 00:14:12.980 "data_size": 63488 00:14:12.980 }, 00:14:12.980 { 00:14:12.980 "name": "pt3", 00:14:12.980 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:12.980 "is_configured": true, 00:14:12.980 "data_offset": 2048, 00:14:12.980 "data_size": 63488 00:14:12.980 }, 00:14:12.980 { 00:14:12.980 "name": "pt4", 00:14:12.980 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:12.980 "is_configured": true, 00:14:12.980 "data_offset": 2048, 00:14:12.980 "data_size": 63488 00:14:12.980 } 00:14:12.980 ] 00:14:12.980 } 00:14:12.980 } 00:14:12.980 }' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:12.980 pt2 00:14:12.980 pt3 00:14:12.980 pt4' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.980 01:58:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:12.980 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.980 
01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.239 [2024-12-07 01:58:18.522773] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 808654a6-7b82-4efe-b382-ea96de8edfae '!=' 808654a6-7b82-4efe-b382-ea96de8edfae ']' 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.239 [2024-12-07 01:58:18.566550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:13.239 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.240 "name": "raid_bdev1", 00:14:13.240 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:13.240 "strip_size_kb": 64, 00:14:13.240 "state": "online", 00:14:13.240 "raid_level": "raid5f", 00:14:13.240 "superblock": true, 00:14:13.240 "num_base_bdevs": 4, 00:14:13.240 "num_base_bdevs_discovered": 3, 00:14:13.240 "num_base_bdevs_operational": 3, 00:14:13.240 "base_bdevs_list": [ 00:14:13.240 { 00:14:13.240 "name": null, 00:14:13.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.240 "is_configured": false, 00:14:13.240 "data_offset": 0, 00:14:13.240 "data_size": 63488 00:14:13.240 }, 00:14:13.240 { 00:14:13.240 "name": "pt2", 00:14:13.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.240 "is_configured": true, 00:14:13.240 "data_offset": 2048, 00:14:13.240 "data_size": 63488 00:14:13.240 }, 00:14:13.240 { 00:14:13.240 "name": "pt3", 00:14:13.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.240 "is_configured": true, 00:14:13.240 "data_offset": 2048, 00:14:13.240 "data_size": 63488 00:14:13.240 }, 00:14:13.240 { 00:14:13.240 "name": "pt4", 00:14:13.240 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:13.240 "is_configured": true, 00:14:13.240 
"data_offset": 2048, 00:14:13.240 "data_size": 63488 00:14:13.240 } 00:14:13.240 ] 00:14:13.240 }' 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.240 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.499 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:13.499 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.499 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.757 [2024-12-07 01:58:18.961848] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.757 [2024-12-07 01:58:18.961871] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.757 [2024-12-07 01:58:18.961933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.758 [2024-12-07 01:58:18.961998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.758 [2024-12-07 01:58:18.962010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 01:58:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 [2024-12-07 01:58:19.049725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:13.758 [2024-12-07 01:58:19.049768] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.758 [2024-12-07 01:58:19.049784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:13.758 [2024-12-07 01:58:19.049794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.758 [2024-12-07 01:58:19.051905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.758 [2024-12-07 01:58:19.051997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:13.758 [2024-12-07 01:58:19.052068] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:13.758 [2024-12-07 01:58:19.052111] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:13.758 pt2 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.758 "name": "raid_bdev1", 00:14:13.758 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:13.758 "strip_size_kb": 64, 00:14:13.758 "state": "configuring", 00:14:13.758 "raid_level": "raid5f", 00:14:13.758 "superblock": true, 00:14:13.758 
"num_base_bdevs": 4, 00:14:13.758 "num_base_bdevs_discovered": 1, 00:14:13.758 "num_base_bdevs_operational": 3, 00:14:13.758 "base_bdevs_list": [ 00:14:13.758 { 00:14:13.758 "name": null, 00:14:13.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.758 "is_configured": false, 00:14:13.758 "data_offset": 2048, 00:14:13.758 "data_size": 63488 00:14:13.758 }, 00:14:13.758 { 00:14:13.758 "name": "pt2", 00:14:13.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.758 "is_configured": true, 00:14:13.758 "data_offset": 2048, 00:14:13.758 "data_size": 63488 00:14:13.758 }, 00:14:13.758 { 00:14:13.758 "name": null, 00:14:13.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:13.758 "is_configured": false, 00:14:13.758 "data_offset": 2048, 00:14:13.758 "data_size": 63488 00:14:13.758 }, 00:14:13.758 { 00:14:13.758 "name": null, 00:14:13.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:13.758 "is_configured": false, 00:14:13.758 "data_offset": 2048, 00:14:13.758 "data_size": 63488 00:14:13.758 } 00:14:13.758 ] 00:14:13.758 }' 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.758 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.327 [2024-12-07 01:58:19.508917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:14.327 [2024-12-07 
01:58:19.509014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.327 [2024-12-07 01:58:19.509047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:14.327 [2024-12-07 01:58:19.509079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.327 [2024-12-07 01:58:19.509481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.327 [2024-12-07 01:58:19.509539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:14.327 [2024-12-07 01:58:19.509627] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:14.327 [2024-12-07 01:58:19.509686] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:14.327 pt3 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.327 "name": "raid_bdev1", 00:14:14.327 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:14.327 "strip_size_kb": 64, 00:14:14.327 "state": "configuring", 00:14:14.327 "raid_level": "raid5f", 00:14:14.327 "superblock": true, 00:14:14.327 "num_base_bdevs": 4, 00:14:14.327 "num_base_bdevs_discovered": 2, 00:14:14.327 "num_base_bdevs_operational": 3, 00:14:14.327 "base_bdevs_list": [ 00:14:14.327 { 00:14:14.327 "name": null, 00:14:14.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.327 "is_configured": false, 00:14:14.327 "data_offset": 2048, 00:14:14.327 "data_size": 63488 00:14:14.327 }, 00:14:14.327 { 00:14:14.327 "name": "pt2", 00:14:14.327 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.327 "is_configured": true, 00:14:14.327 "data_offset": 2048, 00:14:14.327 "data_size": 63488 00:14:14.327 }, 00:14:14.327 { 00:14:14.327 "name": "pt3", 00:14:14.327 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.327 "is_configured": true, 00:14:14.327 "data_offset": 2048, 00:14:14.327 "data_size": 63488 00:14:14.327 }, 00:14:14.327 { 00:14:14.327 "name": null, 00:14:14.327 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.327 "is_configured": false, 00:14:14.327 "data_offset": 2048, 
00:14:14.327 "data_size": 63488 00:14:14.327 } 00:14:14.327 ] 00:14:14.327 }' 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.327 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 [2024-12-07 01:58:19.992098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:14.586 [2024-12-07 01:58:19.992150] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.586 [2024-12-07 01:58:19.992171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:14.586 [2024-12-07 01:58:19.992181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.586 [2024-12-07 01:58:19.992546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.586 [2024-12-07 01:58:19.992594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:14.586 [2024-12-07 01:58:19.992675] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:14.586 [2024-12-07 01:58:19.992699] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:14.586 [2024-12-07 01:58:19.992849] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:14.586 [2024-12-07 01:58:19.992866] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:14.586 [2024-12-07 01:58:19.993108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:14.586 [2024-12-07 01:58:19.993630] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:14.586 [2024-12-07 01:58:19.993640] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:14.586 [2024-12-07 01:58:19.993854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.586 pt4 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.586 01:58:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.586 
01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.586 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.586 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.586 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.586 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.845 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.845 "name": "raid_bdev1", 00:14:14.845 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:14.845 "strip_size_kb": 64, 00:14:14.845 "state": "online", 00:14:14.845 "raid_level": "raid5f", 00:14:14.845 "superblock": true, 00:14:14.845 "num_base_bdevs": 4, 00:14:14.845 "num_base_bdevs_discovered": 3, 00:14:14.845 "num_base_bdevs_operational": 3, 00:14:14.845 "base_bdevs_list": [ 00:14:14.845 { 00:14:14.845 "name": null, 00:14:14.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.845 "is_configured": false, 00:14:14.845 "data_offset": 2048, 00:14:14.845 "data_size": 63488 00:14:14.845 }, 00:14:14.845 { 00:14:14.845 "name": "pt2", 00:14:14.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:14.845 "is_configured": true, 00:14:14.845 "data_offset": 2048, 00:14:14.845 "data_size": 63488 00:14:14.845 }, 00:14:14.845 { 00:14:14.845 "name": "pt3", 00:14:14.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:14.845 "is_configured": true, 00:14:14.845 "data_offset": 2048, 00:14:14.845 "data_size": 63488 00:14:14.845 }, 00:14:14.845 { 00:14:14.845 "name": "pt4", 00:14:14.845 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:14.845 "is_configured": true, 00:14:14.845 "data_offset": 2048, 00:14:14.845 "data_size": 63488 00:14:14.845 } 00:14:14.845 ] 00:14:14.845 }' 00:14:14.845 01:58:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.845 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.104 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:15.104 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.104 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.104 [2024-12-07 01:58:20.419463] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.104 [2024-12-07 01:58:20.419536] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:15.104 [2024-12-07 01:58:20.419622] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.105 [2024-12-07 01:58:20.419744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.105 [2024-12-07 01:58:20.419807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.105 [2024-12-07 01:58:20.495353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:15.105 [2024-12-07 01:58:20.495436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.105 [2024-12-07 01:58:20.495471] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:15.105 [2024-12-07 01:58:20.495517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.105 [2024-12-07 01:58:20.497686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.105 [2024-12-07 01:58:20.497758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:15.105 [2024-12-07 01:58:20.497862] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:15.105 [2024-12-07 01:58:20.497938] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:15.105 
[2024-12-07 01:58:20.498120] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:15.105 [2024-12-07 01:58:20.498173] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:15.105 [2024-12-07 01:58:20.498200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:15.105 [2024-12-07 01:58:20.498233] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.105 [2024-12-07 01:58:20.498328] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:15.105 pt1 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.105 "name": "raid_bdev1", 00:14:15.105 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:15.105 "strip_size_kb": 64, 00:14:15.105 "state": "configuring", 00:14:15.105 "raid_level": "raid5f", 00:14:15.105 "superblock": true, 00:14:15.105 "num_base_bdevs": 4, 00:14:15.105 "num_base_bdevs_discovered": 2, 00:14:15.105 "num_base_bdevs_operational": 3, 00:14:15.105 "base_bdevs_list": [ 00:14:15.105 { 00:14:15.105 "name": null, 00:14:15.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.105 "is_configured": false, 00:14:15.105 "data_offset": 2048, 00:14:15.105 "data_size": 63488 00:14:15.105 }, 00:14:15.105 { 00:14:15.105 "name": "pt2", 00:14:15.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.105 "is_configured": true, 00:14:15.105 "data_offset": 2048, 00:14:15.105 "data_size": 63488 00:14:15.105 }, 00:14:15.105 { 00:14:15.105 "name": "pt3", 00:14:15.105 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.105 "is_configured": true, 00:14:15.105 "data_offset": 2048, 00:14:15.105 "data_size": 63488 00:14:15.105 }, 00:14:15.105 { 00:14:15.105 "name": null, 00:14:15.105 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.105 "is_configured": false, 00:14:15.105 "data_offset": 2048, 00:14:15.105 "data_size": 63488 00:14:15.105 } 00:14:15.105 ] 
00:14:15.105 }' 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.105 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.673 [2024-12-07 01:58:20.974532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:15.673 [2024-12-07 01:58:20.974654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.673 [2024-12-07 01:58:20.974704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:15.673 [2024-12-07 01:58:20.974740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.673 [2024-12-07 01:58:20.975119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.673 [2024-12-07 01:58:20.975183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:15.673 [2024-12-07 01:58:20.975281] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:15.673 [2024-12-07 01:58:20.975344] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:15.673 [2024-12-07 01:58:20.975473] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:15.673 [2024-12-07 01:58:20.975512] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:15.673 [2024-12-07 01:58:20.975832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:15.673 [2024-12-07 01:58:20.976479] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:15.673 [2024-12-07 01:58:20.976536] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:15.673 [2024-12-07 01:58:20.976800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.673 pt4 00:14:15.673 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.674 01:58:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.674 01:58:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.674 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.674 "name": "raid_bdev1", 00:14:15.674 "uuid": "808654a6-7b82-4efe-b382-ea96de8edfae", 00:14:15.674 "strip_size_kb": 64, 00:14:15.674 "state": "online", 00:14:15.674 "raid_level": "raid5f", 00:14:15.674 "superblock": true, 00:14:15.674 "num_base_bdevs": 4, 00:14:15.674 "num_base_bdevs_discovered": 3, 00:14:15.674 "num_base_bdevs_operational": 3, 00:14:15.674 "base_bdevs_list": [ 00:14:15.674 { 00:14:15.674 "name": null, 00:14:15.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.674 "is_configured": false, 00:14:15.674 "data_offset": 2048, 00:14:15.674 "data_size": 63488 00:14:15.674 }, 00:14:15.674 { 00:14:15.674 "name": "pt2", 00:14:15.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.674 "is_configured": true, 00:14:15.674 "data_offset": 2048, 00:14:15.674 "data_size": 63488 00:14:15.674 }, 00:14:15.674 { 00:14:15.674 "name": "pt3", 00:14:15.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:15.674 "is_configured": true, 00:14:15.674 "data_offset": 2048, 00:14:15.674 "data_size": 63488 
00:14:15.674 }, 00:14:15.674 { 00:14:15.674 "name": "pt4", 00:14:15.674 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:15.674 "is_configured": true, 00:14:15.674 "data_offset": 2048, 00:14:15.674 "data_size": 63488 00:14:15.674 } 00:14:15.674 ] 00:14:15.674 }' 00:14:15.674 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.674 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.242 [2024-12-07 01:58:21.461895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 808654a6-7b82-4efe-b382-ea96de8edfae '!=' 808654a6-7b82-4efe-b382-ea96de8edfae ']' 00:14:16.242 01:58:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94240 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 94240 ']' 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 94240 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94240 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94240' 00:14:16.242 killing process with pid 94240 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 94240 00:14:16.242 [2024-12-07 01:58:21.526563] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.242 [2024-12-07 01:58:21.526640] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.242 [2024-12-07 01:58:21.526726] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.242 [2024-12-07 01:58:21.526736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:16.242 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 94240 00:14:16.242 [2024-12-07 01:58:21.569464] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.502 01:58:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:16.502 
00:14:16.502 real 0m6.975s 00:14:16.502 user 0m11.713s 00:14:16.502 sys 0m1.517s 00:14:16.502 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:16.502 01:58:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.502 ************************************ 00:14:16.502 END TEST raid5f_superblock_test 00:14:16.502 ************************************ 00:14:16.502 01:58:21 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:16.502 01:58:21 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:16.502 01:58:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:16.502 01:58:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.502 01:58:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:16.502 ************************************ 00:14:16.502 START TEST raid5f_rebuild_test 00:14:16.502 ************************************ 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:16.502 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:16.503 01:58:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94710 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94710 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 94710 ']' 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:16.503 01:58:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.763 [2024-12-07 01:58:21.985014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:16.763 [2024-12-07 01:58:21.985200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94710 ] 00:14:16.763 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:16.763 Zero copy mechanism will not be used. 00:14:16.763 [2024-12-07 01:58:22.130391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.763 [2024-12-07 01:58:22.174134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.763 [2024-12-07 01:58:22.214795] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:16.763 [2024-12-07 01:58:22.214910] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.725 BaseBdev1_malloc 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:14:17.725 [2024-12-07 01:58:22.820897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:17.725 [2024-12-07 01:58:22.820993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.725 [2024-12-07 01:58:22.821035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:17.725 [2024-12-07 01:58:22.821068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.725 [2024-12-07 01:58:22.823206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.725 [2024-12-07 01:58:22.823276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.725 BaseBdev1 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.725 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.725 BaseBdev2_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 [2024-12-07 01:58:22.865621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:17.726 [2024-12-07 01:58:22.865839] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.726 [2024-12-07 01:58:22.865932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:17.726 [2024-12-07 01:58:22.866011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.726 [2024-12-07 01:58:22.870869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.726 [2024-12-07 01:58:22.870939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:17.726 BaseBdev2 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 BaseBdev3_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 [2024-12-07 01:58:22.896584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:17.726 [2024-12-07 01:58:22.896639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.726 [2024-12-07 01:58:22.896680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:17.726 
[2024-12-07 01:58:22.896690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.726 [2024-12-07 01:58:22.898742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.726 [2024-12-07 01:58:22.898773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:17.726 BaseBdev3 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 BaseBdev4_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 [2024-12-07 01:58:22.924821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:17.726 [2024-12-07 01:58:22.924903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.726 [2024-12-07 01:58:22.924929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:17.726 [2024-12-07 01:58:22.924938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.726 [2024-12-07 01:58:22.926958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:17.726 [2024-12-07 01:58:22.926991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:17.726 BaseBdev4 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 spare_malloc 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 spare_delay 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 [2024-12-07 01:58:22.965056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:17.726 [2024-12-07 01:58:22.965099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.726 [2024-12-07 01:58:22.965118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:17.726 [2024-12-07 01:58:22.965126] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.726 [2024-12-07 01:58:22.967151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.726 [2024-12-07 01:58:22.967187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:17.726 spare 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.726 [2024-12-07 01:58:22.977112] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.726 [2024-12-07 01:58:22.978900] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:17.726 [2024-12-07 01:58:22.978957] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.726 [2024-12-07 01:58:22.979004] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:17.726 [2024-12-07 01:58:22.979085] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:17.726 [2024-12-07 01:58:22.979099] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:17.726 [2024-12-07 01:58:22.979343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:17.726 [2024-12-07 01:58:22.979794] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:17.726 [2024-12-07 01:58:22.979826] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:17.726 [2024-12-07 
01:58:22.979950] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.726 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.727 01:58:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.727 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.727 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.727 "name": "raid_bdev1", 00:14:17.727 "uuid": 
"51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:17.727 "strip_size_kb": 64, 00:14:17.727 "state": "online", 00:14:17.727 "raid_level": "raid5f", 00:14:17.727 "superblock": false, 00:14:17.727 "num_base_bdevs": 4, 00:14:17.727 "num_base_bdevs_discovered": 4, 00:14:17.727 "num_base_bdevs_operational": 4, 00:14:17.727 "base_bdevs_list": [ 00:14:17.727 { 00:14:17.727 "name": "BaseBdev1", 00:14:17.727 "uuid": "a0918472-f002-53f4-b027-65cbd45e8084", 00:14:17.727 "is_configured": true, 00:14:17.727 "data_offset": 0, 00:14:17.727 "data_size": 65536 00:14:17.727 }, 00:14:17.727 { 00:14:17.727 "name": "BaseBdev2", 00:14:17.727 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:17.727 "is_configured": true, 00:14:17.727 "data_offset": 0, 00:14:17.727 "data_size": 65536 00:14:17.727 }, 00:14:17.727 { 00:14:17.727 "name": "BaseBdev3", 00:14:17.727 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:17.727 "is_configured": true, 00:14:17.727 "data_offset": 0, 00:14:17.727 "data_size": 65536 00:14:17.727 }, 00:14:17.727 { 00:14:17.727 "name": "BaseBdev4", 00:14:17.727 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:17.727 "is_configured": true, 00:14:17.727 "data_offset": 0, 00:14:17.727 "data_size": 65536 00:14:17.727 } 00:14:17.727 ] 00:14:17.727 }' 00:14:17.727 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.727 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.986 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:17.986 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.986 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.986 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:17.986 [2024-12-07 01:58:23.437046] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:17.986 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:18.242 01:58:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:18.243 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.243 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:18.501 [2024-12-07 01:58:23.708435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:18.501 /dev/nbd0 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:18.501 1+0 records in 00:14:18.501 1+0 records out 00:14:18.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266317 s, 15.4 MB/s 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.501 01:58:23 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:18.501 01:58:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:18.759 512+0 records in 00:14:18.759 512+0 records out 00:14:18.759 100663296 bytes (101 MB, 96 MiB) copied, 0.405627 s, 248 MB/s 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:18.759 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:19.017 [2024-12-07 01:58:24.397429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.017 [2024-12-07 01:58:24.413495] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.017 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.018 "name": "raid_bdev1", 00:14:19.018 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:19.018 "strip_size_kb": 64, 00:14:19.018 "state": "online", 00:14:19.018 "raid_level": "raid5f", 00:14:19.018 "superblock": false, 00:14:19.018 "num_base_bdevs": 4, 00:14:19.018 "num_base_bdevs_discovered": 3, 00:14:19.018 "num_base_bdevs_operational": 3, 00:14:19.018 "base_bdevs_list": [ 00:14:19.018 { 00:14:19.018 "name": null, 00:14:19.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.018 "is_configured": false, 00:14:19.018 "data_offset": 0, 00:14:19.018 "data_size": 65536 00:14:19.018 }, 00:14:19.018 { 00:14:19.018 "name": "BaseBdev2", 00:14:19.018 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:19.018 "is_configured": true, 00:14:19.018 
"data_offset": 0, 00:14:19.018 "data_size": 65536 00:14:19.018 }, 00:14:19.018 { 00:14:19.018 "name": "BaseBdev3", 00:14:19.018 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:19.018 "is_configured": true, 00:14:19.018 "data_offset": 0, 00:14:19.018 "data_size": 65536 00:14:19.018 }, 00:14:19.018 { 00:14:19.018 "name": "BaseBdev4", 00:14:19.018 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:19.018 "is_configured": true, 00:14:19.018 "data_offset": 0, 00:14:19.018 "data_size": 65536 00:14:19.018 } 00:14:19.018 ] 00:14:19.018 }' 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.018 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.582 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.582 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.582 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.582 [2024-12-07 01:58:24.860764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.582 [2024-12-07 01:58:24.864058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:19.582 [2024-12-07 01:58:24.866092] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.583 01:58:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.583 01:58:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.526 "name": "raid_bdev1", 00:14:20.526 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:20.526 "strip_size_kb": 64, 00:14:20.526 "state": "online", 00:14:20.526 "raid_level": "raid5f", 00:14:20.526 "superblock": false, 00:14:20.526 "num_base_bdevs": 4, 00:14:20.526 "num_base_bdevs_discovered": 4, 00:14:20.526 "num_base_bdevs_operational": 4, 00:14:20.526 "process": { 00:14:20.526 "type": "rebuild", 00:14:20.526 "target": "spare", 00:14:20.526 "progress": { 00:14:20.526 "blocks": 19200, 00:14:20.526 "percent": 9 00:14:20.526 } 00:14:20.526 }, 00:14:20.526 "base_bdevs_list": [ 00:14:20.526 { 00:14:20.526 "name": "spare", 00:14:20.526 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:20.526 "is_configured": true, 00:14:20.526 "data_offset": 0, 00:14:20.526 "data_size": 65536 00:14:20.526 }, 00:14:20.526 { 00:14:20.526 "name": "BaseBdev2", 00:14:20.526 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:20.526 "is_configured": true, 00:14:20.526 "data_offset": 0, 00:14:20.526 "data_size": 65536 00:14:20.526 }, 00:14:20.526 { 00:14:20.526 "name": "BaseBdev3", 00:14:20.526 "uuid": 
"c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:20.526 "is_configured": true, 00:14:20.526 "data_offset": 0, 00:14:20.526 "data_size": 65536 00:14:20.526 }, 00:14:20.526 { 00:14:20.526 "name": "BaseBdev4", 00:14:20.526 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:20.526 "is_configured": true, 00:14:20.526 "data_offset": 0, 00:14:20.526 "data_size": 65536 00:14:20.526 } 00:14:20.526 ] 00:14:20.526 }' 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:20.526 01:58:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 [2024-12-07 01:58:26.028598] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.785 [2024-12-07 01:58:26.071385] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.785 [2024-12-07 01:58:26.071455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.785 [2024-12-07 01:58:26.071473] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.785 [2024-12-07 01:58:26.071480] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.785 "name": "raid_bdev1", 00:14:20.785 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:20.785 "strip_size_kb": 64, 00:14:20.785 "state": "online", 00:14:20.785 "raid_level": "raid5f", 00:14:20.785 "superblock": false, 00:14:20.785 "num_base_bdevs": 4, 00:14:20.785 "num_base_bdevs_discovered": 3, 00:14:20.785 
"num_base_bdevs_operational": 3, 00:14:20.785 "base_bdevs_list": [ 00:14:20.785 { 00:14:20.785 "name": null, 00:14:20.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.785 "is_configured": false, 00:14:20.785 "data_offset": 0, 00:14:20.785 "data_size": 65536 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": "BaseBdev2", 00:14:20.785 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 0, 00:14:20.785 "data_size": 65536 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": "BaseBdev3", 00:14:20.785 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 0, 00:14:20.785 "data_size": 65536 00:14:20.785 }, 00:14:20.785 { 00:14:20.785 "name": "BaseBdev4", 00:14:20.785 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:20.785 "is_configured": true, 00:14:20.785 "data_offset": 0, 00:14:20.785 "data_size": 65536 00:14:20.785 } 00:14:20.785 ] 00:14:20.785 }' 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.785 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.350 01:58:26 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.350 "name": "raid_bdev1", 00:14:21.350 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:21.350 "strip_size_kb": 64, 00:14:21.350 "state": "online", 00:14:21.350 "raid_level": "raid5f", 00:14:21.350 "superblock": false, 00:14:21.350 "num_base_bdevs": 4, 00:14:21.350 "num_base_bdevs_discovered": 3, 00:14:21.350 "num_base_bdevs_operational": 3, 00:14:21.350 "base_bdevs_list": [ 00:14:21.350 { 00:14:21.350 "name": null, 00:14:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.350 "is_configured": false, 00:14:21.350 "data_offset": 0, 00:14:21.350 "data_size": 65536 00:14:21.350 }, 00:14:21.350 { 00:14:21.350 "name": "BaseBdev2", 00:14:21.350 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:21.350 "is_configured": true, 00:14:21.350 "data_offset": 0, 00:14:21.350 "data_size": 65536 00:14:21.350 }, 00:14:21.350 { 00:14:21.350 "name": "BaseBdev3", 00:14:21.350 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:21.350 "is_configured": true, 00:14:21.350 "data_offset": 0, 00:14:21.350 "data_size": 65536 00:14:21.350 }, 00:14:21.350 { 00:14:21.350 "name": "BaseBdev4", 00:14:21.350 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:21.350 "is_configured": true, 00:14:21.350 "data_offset": 0, 00:14:21.350 "data_size": 65536 00:14:21.350 } 00:14:21.350 ] 00:14:21.350 }' 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.350 [2024-12-07 01:58:26.695863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.350 [2024-12-07 01:58:26.698939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:14:21.350 [2024-12-07 01:58:26.701028] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.350 01:58:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.282 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.540 "name": "raid_bdev1", 00:14:22.540 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:22.540 "strip_size_kb": 64, 00:14:22.540 "state": "online", 00:14:22.540 "raid_level": "raid5f", 00:14:22.540 "superblock": false, 00:14:22.540 "num_base_bdevs": 4, 00:14:22.540 "num_base_bdevs_discovered": 4, 00:14:22.540 "num_base_bdevs_operational": 4, 00:14:22.540 "process": { 00:14:22.540 "type": "rebuild", 00:14:22.540 "target": "spare", 00:14:22.540 "progress": { 00:14:22.540 "blocks": 19200, 00:14:22.540 "percent": 9 00:14:22.540 } 00:14:22.540 }, 00:14:22.540 "base_bdevs_list": [ 00:14:22.540 { 00:14:22.540 "name": "spare", 00:14:22.540 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev2", 00:14:22.540 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev3", 00:14:22.540 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev4", 00:14:22.540 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 } 00:14:22.540 ] 00:14:22.540 }' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.540 
"name": "raid_bdev1", 00:14:22.540 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:22.540 "strip_size_kb": 64, 00:14:22.540 "state": "online", 00:14:22.540 "raid_level": "raid5f", 00:14:22.540 "superblock": false, 00:14:22.540 "num_base_bdevs": 4, 00:14:22.540 "num_base_bdevs_discovered": 4, 00:14:22.540 "num_base_bdevs_operational": 4, 00:14:22.540 "process": { 00:14:22.540 "type": "rebuild", 00:14:22.540 "target": "spare", 00:14:22.540 "progress": { 00:14:22.540 "blocks": 21120, 00:14:22.540 "percent": 10 00:14:22.540 } 00:14:22.540 }, 00:14:22.540 "base_bdevs_list": [ 00:14:22.540 { 00:14:22.540 "name": "spare", 00:14:22.540 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev2", 00:14:22.540 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev3", 00:14:22.540 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 }, 00:14:22.540 { 00:14:22.540 "name": "BaseBdev4", 00:14:22.540 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:22.540 "is_configured": true, 00:14:22.540 "data_offset": 0, 00:14:22.540 "data_size": 65536 00:14:22.540 } 00:14:22.540 ] 00:14:22.540 }' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.540 01:58:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.797 01:58:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.797 01:58:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.730 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.730 "name": "raid_bdev1", 00:14:23.730 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:23.730 "strip_size_kb": 64, 00:14:23.730 "state": "online", 00:14:23.730 "raid_level": "raid5f", 00:14:23.730 "superblock": false, 00:14:23.730 "num_base_bdevs": 4, 00:14:23.730 "num_base_bdevs_discovered": 4, 00:14:23.730 "num_base_bdevs_operational": 4, 00:14:23.730 "process": { 00:14:23.730 "type": "rebuild", 00:14:23.730 "target": "spare", 00:14:23.730 "progress": { 00:14:23.730 "blocks": 44160, 00:14:23.730 "percent": 22 00:14:23.730 } 00:14:23.730 }, 00:14:23.730 "base_bdevs_list": [ 00:14:23.730 { 
00:14:23.730 "name": "spare", 00:14:23.730 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:23.730 "is_configured": true, 00:14:23.730 "data_offset": 0, 00:14:23.730 "data_size": 65536 00:14:23.730 }, 00:14:23.730 { 00:14:23.730 "name": "BaseBdev2", 00:14:23.730 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:23.730 "is_configured": true, 00:14:23.730 "data_offset": 0, 00:14:23.730 "data_size": 65536 00:14:23.730 }, 00:14:23.730 { 00:14:23.730 "name": "BaseBdev3", 00:14:23.730 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:23.730 "is_configured": true, 00:14:23.730 "data_offset": 0, 00:14:23.730 "data_size": 65536 00:14:23.730 }, 00:14:23.730 { 00:14:23.730 "name": "BaseBdev4", 00:14:23.730 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:23.730 "is_configured": true, 00:14:23.731 "data_offset": 0, 00:14:23.731 "data_size": 65536 00:14:23.731 } 00:14:23.731 ] 00:14:23.731 }' 00:14:23.731 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.731 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.731 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.731 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.731 01:58:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.105 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.105 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.105 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.105 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.106 "name": "raid_bdev1", 00:14:25.106 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:25.106 "strip_size_kb": 64, 00:14:25.106 "state": "online", 00:14:25.106 "raid_level": "raid5f", 00:14:25.106 "superblock": false, 00:14:25.106 "num_base_bdevs": 4, 00:14:25.106 "num_base_bdevs_discovered": 4, 00:14:25.106 "num_base_bdevs_operational": 4, 00:14:25.106 "process": { 00:14:25.106 "type": "rebuild", 00:14:25.106 "target": "spare", 00:14:25.106 "progress": { 00:14:25.106 "blocks": 65280, 00:14:25.106 "percent": 33 00:14:25.106 } 00:14:25.106 }, 00:14:25.106 "base_bdevs_list": [ 00:14:25.106 { 00:14:25.106 "name": "spare", 00:14:25.106 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:25.106 "is_configured": true, 00:14:25.106 "data_offset": 0, 00:14:25.106 "data_size": 65536 00:14:25.106 }, 00:14:25.106 { 00:14:25.106 "name": "BaseBdev2", 00:14:25.106 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:25.106 "is_configured": true, 00:14:25.106 "data_offset": 0, 00:14:25.106 "data_size": 65536 00:14:25.106 }, 00:14:25.106 { 00:14:25.106 "name": "BaseBdev3", 00:14:25.106 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:25.106 "is_configured": true, 00:14:25.106 "data_offset": 0, 00:14:25.106 
"data_size": 65536 00:14:25.106 }, 00:14:25.106 { 00:14:25.106 "name": "BaseBdev4", 00:14:25.106 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:25.106 "is_configured": true, 00:14:25.106 "data_offset": 0, 00:14:25.106 "data_size": 65536 00:14:25.106 } 00:14:25.106 ] 00:14:25.106 }' 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.106 01:58:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.042 "name": "raid_bdev1", 00:14:26.042 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:26.042 "strip_size_kb": 64, 00:14:26.042 "state": "online", 00:14:26.042 "raid_level": "raid5f", 00:14:26.042 "superblock": false, 00:14:26.042 "num_base_bdevs": 4, 00:14:26.042 "num_base_bdevs_discovered": 4, 00:14:26.042 "num_base_bdevs_operational": 4, 00:14:26.042 "process": { 00:14:26.042 "type": "rebuild", 00:14:26.042 "target": "spare", 00:14:26.042 "progress": { 00:14:26.042 "blocks": 86400, 00:14:26.042 "percent": 43 00:14:26.042 } 00:14:26.042 }, 00:14:26.042 "base_bdevs_list": [ 00:14:26.042 { 00:14:26.042 "name": "spare", 00:14:26.042 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:26.042 "is_configured": true, 00:14:26.042 "data_offset": 0, 00:14:26.042 "data_size": 65536 00:14:26.042 }, 00:14:26.042 { 00:14:26.042 "name": "BaseBdev2", 00:14:26.042 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:26.042 "is_configured": true, 00:14:26.042 "data_offset": 0, 00:14:26.042 "data_size": 65536 00:14:26.042 }, 00:14:26.042 { 00:14:26.042 "name": "BaseBdev3", 00:14:26.042 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:26.042 "is_configured": true, 00:14:26.042 "data_offset": 0, 00:14:26.042 "data_size": 65536 00:14:26.042 }, 00:14:26.042 { 00:14:26.042 "name": "BaseBdev4", 00:14:26.042 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:26.042 "is_configured": true, 00:14:26.042 "data_offset": 0, 00:14:26.042 "data_size": 65536 00:14:26.042 } 00:14:26.042 ] 00:14:26.042 }' 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.042 01:58:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.977 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.977 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.977 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.977 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.978 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.236 "name": "raid_bdev1", 00:14:27.236 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:27.236 "strip_size_kb": 64, 00:14:27.236 "state": "online", 00:14:27.236 "raid_level": "raid5f", 00:14:27.236 "superblock": false, 00:14:27.236 "num_base_bdevs": 4, 00:14:27.236 "num_base_bdevs_discovered": 4, 00:14:27.236 "num_base_bdevs_operational": 4, 00:14:27.236 "process": { 00:14:27.236 "type": "rebuild", 00:14:27.236 "target": "spare", 00:14:27.236 
"progress": { 00:14:27.236 "blocks": 109440, 00:14:27.236 "percent": 55 00:14:27.236 } 00:14:27.236 }, 00:14:27.236 "base_bdevs_list": [ 00:14:27.236 { 00:14:27.236 "name": "spare", 00:14:27.236 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:27.236 "is_configured": true, 00:14:27.236 "data_offset": 0, 00:14:27.236 "data_size": 65536 00:14:27.236 }, 00:14:27.236 { 00:14:27.236 "name": "BaseBdev2", 00:14:27.236 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:27.236 "is_configured": true, 00:14:27.236 "data_offset": 0, 00:14:27.236 "data_size": 65536 00:14:27.236 }, 00:14:27.236 { 00:14:27.236 "name": "BaseBdev3", 00:14:27.236 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:27.236 "is_configured": true, 00:14:27.236 "data_offset": 0, 00:14:27.236 "data_size": 65536 00:14:27.236 }, 00:14:27.236 { 00:14:27.236 "name": "BaseBdev4", 00:14:27.236 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:27.236 "is_configured": true, 00:14:27.236 "data_offset": 0, 00:14:27.236 "data_size": 65536 00:14:27.236 } 00:14:27.236 ] 00:14:27.236 }' 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.236 01:58:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.173 01:58:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.173 "name": "raid_bdev1", 00:14:28.173 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:28.173 "strip_size_kb": 64, 00:14:28.173 "state": "online", 00:14:28.173 "raid_level": "raid5f", 00:14:28.173 "superblock": false, 00:14:28.173 "num_base_bdevs": 4, 00:14:28.173 "num_base_bdevs_discovered": 4, 00:14:28.173 "num_base_bdevs_operational": 4, 00:14:28.173 "process": { 00:14:28.173 "type": "rebuild", 00:14:28.173 "target": "spare", 00:14:28.173 "progress": { 00:14:28.173 "blocks": 130560, 00:14:28.173 "percent": 66 00:14:28.173 } 00:14:28.173 }, 00:14:28.173 "base_bdevs_list": [ 00:14:28.173 { 00:14:28.173 "name": "spare", 00:14:28.173 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:28.173 "is_configured": true, 00:14:28.173 "data_offset": 0, 00:14:28.173 "data_size": 65536 00:14:28.173 }, 00:14:28.173 { 00:14:28.173 "name": "BaseBdev2", 00:14:28.173 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:28.173 "is_configured": true, 00:14:28.173 "data_offset": 0, 00:14:28.173 "data_size": 65536 00:14:28.173 }, 00:14:28.173 { 
00:14:28.173 "name": "BaseBdev3", 00:14:28.173 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:28.173 "is_configured": true, 00:14:28.173 "data_offset": 0, 00:14:28.173 "data_size": 65536 00:14:28.173 }, 00:14:28.173 { 00:14:28.173 "name": "BaseBdev4", 00:14:28.173 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:28.173 "is_configured": true, 00:14:28.173 "data_offset": 0, 00:14:28.173 "data_size": 65536 00:14:28.173 } 00:14:28.173 ] 00:14:28.173 }' 00:14:28.173 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.432 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.432 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.432 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.432 01:58:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.367 "name": "raid_bdev1", 00:14:29.367 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:29.367 "strip_size_kb": 64, 00:14:29.367 "state": "online", 00:14:29.367 "raid_level": "raid5f", 00:14:29.367 "superblock": false, 00:14:29.367 "num_base_bdevs": 4, 00:14:29.367 "num_base_bdevs_discovered": 4, 00:14:29.367 "num_base_bdevs_operational": 4, 00:14:29.367 "process": { 00:14:29.367 "type": "rebuild", 00:14:29.367 "target": "spare", 00:14:29.367 "progress": { 00:14:29.367 "blocks": 153600, 00:14:29.367 "percent": 78 00:14:29.367 } 00:14:29.367 }, 00:14:29.367 "base_bdevs_list": [ 00:14:29.367 { 00:14:29.367 "name": "spare", 00:14:29.367 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:29.367 "is_configured": true, 00:14:29.367 "data_offset": 0, 00:14:29.367 "data_size": 65536 00:14:29.367 }, 00:14:29.367 { 00:14:29.367 "name": "BaseBdev2", 00:14:29.367 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:29.367 "is_configured": true, 00:14:29.367 "data_offset": 0, 00:14:29.367 "data_size": 65536 00:14:29.367 }, 00:14:29.367 { 00:14:29.367 "name": "BaseBdev3", 00:14:29.367 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:29.367 "is_configured": true, 00:14:29.367 "data_offset": 0, 00:14:29.367 "data_size": 65536 00:14:29.367 }, 00:14:29.367 { 00:14:29.367 "name": "BaseBdev4", 00:14:29.367 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:29.367 "is_configured": true, 00:14:29.367 "data_offset": 0, 00:14:29.367 "data_size": 65536 00:14:29.367 } 00:14:29.367 ] 00:14:29.367 }' 00:14:29.367 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.367 01:58:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.626 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.626 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.626 01:58:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.559 "name": "raid_bdev1", 00:14:30.559 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:30.559 "strip_size_kb": 64, 00:14:30.559 "state": "online", 00:14:30.559 "raid_level": "raid5f", 00:14:30.559 "superblock": false, 00:14:30.559 "num_base_bdevs": 4, 00:14:30.559 
"num_base_bdevs_discovered": 4, 00:14:30.559 "num_base_bdevs_operational": 4, 00:14:30.559 "process": { 00:14:30.559 "type": "rebuild", 00:14:30.559 "target": "spare", 00:14:30.559 "progress": { 00:14:30.559 "blocks": 174720, 00:14:30.559 "percent": 88 00:14:30.559 } 00:14:30.559 }, 00:14:30.559 "base_bdevs_list": [ 00:14:30.559 { 00:14:30.559 "name": "spare", 00:14:30.559 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:30.559 "is_configured": true, 00:14:30.559 "data_offset": 0, 00:14:30.559 "data_size": 65536 00:14:30.559 }, 00:14:30.559 { 00:14:30.559 "name": "BaseBdev2", 00:14:30.559 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:30.559 "is_configured": true, 00:14:30.559 "data_offset": 0, 00:14:30.559 "data_size": 65536 00:14:30.559 }, 00:14:30.559 { 00:14:30.559 "name": "BaseBdev3", 00:14:30.559 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:30.559 "is_configured": true, 00:14:30.559 "data_offset": 0, 00:14:30.559 "data_size": 65536 00:14:30.559 }, 00:14:30.559 { 00:14:30.559 "name": "BaseBdev4", 00:14:30.559 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:30.559 "is_configured": true, 00:14:30.559 "data_offset": 0, 00:14:30.559 "data_size": 65536 00:14:30.559 } 00:14:30.559 ] 00:14:30.559 }' 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.559 01:58:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.817 01:58:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.817 01:58:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.752 [2024-12-07 01:58:37.043799] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.752 [2024-12-07 01:58:37.043894] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.752 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.753 [2024-12-07 01:58:37.043939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.753 "name": "raid_bdev1", 00:14:31.753 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:31.753 "strip_size_kb": 64, 00:14:31.753 "state": "online", 00:14:31.753 "raid_level": "raid5f", 00:14:31.753 "superblock": false, 00:14:31.753 "num_base_bdevs": 4, 00:14:31.753 "num_base_bdevs_discovered": 4, 00:14:31.753 "num_base_bdevs_operational": 4, 00:14:31.753 "base_bdevs_list": [ 00:14:31.753 { 00:14:31.753 "name": "spare", 00:14:31.753 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:31.753 
"is_configured": true, 00:14:31.753 "data_offset": 0, 00:14:31.753 "data_size": 65536 00:14:31.753 }, 00:14:31.753 { 00:14:31.753 "name": "BaseBdev2", 00:14:31.753 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:31.753 "is_configured": true, 00:14:31.753 "data_offset": 0, 00:14:31.753 "data_size": 65536 00:14:31.753 }, 00:14:31.753 { 00:14:31.753 "name": "BaseBdev3", 00:14:31.753 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:31.753 "is_configured": true, 00:14:31.753 "data_offset": 0, 00:14:31.753 "data_size": 65536 00:14:31.753 }, 00:14:31.753 { 00:14:31.753 "name": "BaseBdev4", 00:14:31.753 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:31.753 "is_configured": true, 00:14:31.753 "data_offset": 0, 00:14:31.753 "data_size": 65536 00:14:31.753 } 00:14:31.753 ] 00:14:31.753 }' 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.753 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.011 "name": "raid_bdev1", 00:14:32.011 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:32.011 "strip_size_kb": 64, 00:14:32.011 "state": "online", 00:14:32.011 "raid_level": "raid5f", 00:14:32.011 "superblock": false, 00:14:32.011 "num_base_bdevs": 4, 00:14:32.011 "num_base_bdevs_discovered": 4, 00:14:32.011 "num_base_bdevs_operational": 4, 00:14:32.011 "base_bdevs_list": [ 00:14:32.011 { 00:14:32.011 "name": "spare", 00:14:32.011 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:32.011 "is_configured": true, 00:14:32.011 "data_offset": 0, 00:14:32.011 "data_size": 65536 00:14:32.011 }, 00:14:32.011 { 00:14:32.011 "name": "BaseBdev2", 00:14:32.011 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:32.011 "is_configured": true, 00:14:32.011 "data_offset": 0, 00:14:32.011 "data_size": 65536 00:14:32.011 }, 00:14:32.011 { 00:14:32.011 "name": "BaseBdev3", 00:14:32.011 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:32.011 "is_configured": true, 00:14:32.011 "data_offset": 0, 00:14:32.011 "data_size": 65536 00:14:32.011 }, 00:14:32.011 { 00:14:32.011 "name": "BaseBdev4", 00:14:32.011 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:32.011 "is_configured": true, 00:14:32.011 "data_offset": 0, 00:14:32.011 "data_size": 65536 00:14:32.011 } 00:14:32.011 ] 00:14:32.011 }' 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.011 
01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:32.011 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.012 01:58:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.012 "name": "raid_bdev1", 00:14:32.012 "uuid": "51c8d22b-dd24-41a9-9ad3-4748d14418f9", 00:14:32.012 "strip_size_kb": 64, 00:14:32.012 "state": "online", 00:14:32.012 "raid_level": "raid5f", 00:14:32.012 "superblock": false, 00:14:32.012 "num_base_bdevs": 4, 00:14:32.012 "num_base_bdevs_discovered": 4, 00:14:32.012 "num_base_bdevs_operational": 4, 00:14:32.012 "base_bdevs_list": [ 00:14:32.012 { 00:14:32.012 "name": "spare", 00:14:32.012 "uuid": "68a35c65-8f65-5dcd-9163-5bfc076081c9", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev2", 00:14:32.012 "uuid": "fc6dc13f-a774-5eb0-8112-e92b2ebf749d", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev3", 00:14:32.012 "uuid": "c2dc0168-9115-5f2e-ac99-ceb2968bb1a9", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev4", 00:14:32.012 "uuid": "3d20d45c-1504-5737-bc54-93050b726f49", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }' 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.012 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 [2024-12-07 01:58:37.688052] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:14:32.271 [2024-12-07 01:58:37.688083] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.271 [2024-12-07 01:58:37.688160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.271 [2024-12-07 01:58:37.688252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.271 [2024-12-07 01:58:37.688273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.271 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.529 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.530 
01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.530 /dev/nbd0 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.530 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:32.789 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:32.789 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.789 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.789 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.789 1+0 records in 00:14:32.789 1+0 records out 00:14:32.789 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000426029 s, 9.6 MB/s 00:14:32.789 01:58:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:32.789 /dev/nbd1 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.789 01:58:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.789 1+0 records in 00:14:32.789 1+0 records out 00:14:32.789 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416253 s, 9.8 MB/s 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:32.789 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.048 01:58:38 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.048 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- 
# return 0 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94710 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 94710 ']' 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 94710 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94710 00:14:33.308 killing process with pid 94710 00:14:33.308 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.308 00:14:33.308 Latency(us) 00:14:33.308 [2024-12-07T01:58:38.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.308 [2024-12-07T01:58:38.770Z] =================================================================================================================== 00:14:33.308 [2024-12-07T01:58:38.770Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94710' 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 94710 00:14:33.308 [2024-12-07 01:58:38.748109] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.308 01:58:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 94710 00:14:33.571 [2024-12-07 01:58:38.798897] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:33.571 01:58:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:33.571 00:14:33.571 real 0m17.135s 00:14:33.571 user 0m20.790s 00:14:33.571 sys 0m2.256s 00:14:33.571 01:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:33.571 01:58:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.571 ************************************ 00:14:33.571 END TEST raid5f_rebuild_test 00:14:33.571 ************************************ 00:14:33.844 01:58:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:33.844 01:58:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:33.844 01:58:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:33.844 01:58:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:33.844 ************************************ 00:14:33.844 START TEST raid5f_rebuild_test_sb 00:14:33.844 ************************************ 00:14:33.844 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:14:33.844 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:33.844 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:33.844 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs 
)) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local 
data_offset 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95191 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95191 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 95191 ']' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:33.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:33.845 01:58:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.845 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:33.845 Zero copy mechanism will not be used. 00:14:33.845 [2024-12-07 01:58:39.193366] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:33.845 [2024-12-07 01:58:39.193487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95191 ] 00:14:34.119 [2024-12-07 01:58:39.337299] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.119 [2024-12-07 01:58:39.381786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.119 [2024-12-07 01:58:39.424002] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.119 [2024-12-07 01:58:39.424044] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.688 BaseBdev1_malloc 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.688 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.688 
01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.688 [2024-12-07 01:58:40.033458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.689 [2024-12-07 01:58:40.033515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.689 [2024-12-07 01:58:40.033563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:34.689 [2024-12-07 01:58:40.033583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.689 [2024-12-07 01:58:40.035643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.689 [2024-12-07 01:58:40.035690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.689 BaseBdev1 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 BaseBdev2_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 [2024-12-07 01:58:40.082834] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:14:34.689 [2024-12-07 01:58:40.082930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.689 [2024-12-07 01:58:40.082978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.689 [2024-12-07 01:58:40.083000] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.689 [2024-12-07 01:58:40.087208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.689 [2024-12-07 01:58:40.087257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.689 BaseBdev2 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 BaseBdev3_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 [2024-12-07 01:58:40.113212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:34.689 [2024-12-07 01:58:40.113263] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.689 [2024-12-07 
01:58:40.113307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.689 [2024-12-07 01:58:40.113316] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.689 [2024-12-07 01:58:40.115309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.689 [2024-12-07 01:58:40.115344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:34.689 BaseBdev3 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 BaseBdev4_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.689 [2024-12-07 01:58:40.141939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:34.689 [2024-12-07 01:58:40.141983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.689 [2024-12-07 01:58:40.142005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:34.689 [2024-12-07 01:58:40.142013] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.689 [2024-12-07 01:58:40.144004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.689 [2024-12-07 01:58:40.144099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:34.689 BaseBdev4 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.689 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 spare_malloc 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 spare_delay 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 [2024-12-07 01:58:40.182636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:34.948 [2024-12-07 01:58:40.182732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:34.948 [2024-12-07 01:58:40.182758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:34.948 [2024-12-07 01:58:40.182767] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.948 [2024-12-07 01:58:40.184846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.948 [2024-12-07 01:58:40.184871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:34.948 spare 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 [2024-12-07 01:58:40.194711] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.948 [2024-12-07 01:58:40.196482] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.948 [2024-12-07 01:58:40.196544] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.948 [2024-12-07 01:58:40.196595] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:34.948 [2024-12-07 01:58:40.196796] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:34.948 [2024-12-07 01:58:40.196809] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:34.948 [2024-12-07 01:58:40.197062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:34.948 [2024-12-07 01:58:40.197533] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001200 00:14:34.948 [2024-12-07 01:58:40.197552] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:34.948 [2024-12-07 01:58:40.197691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.948 
01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.948 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.948 "name": "raid_bdev1", 00:14:34.948 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:34.948 "strip_size_kb": 64, 00:14:34.948 "state": "online", 00:14:34.948 "raid_level": "raid5f", 00:14:34.948 "superblock": true, 00:14:34.948 "num_base_bdevs": 4, 00:14:34.948 "num_base_bdevs_discovered": 4, 00:14:34.948 "num_base_bdevs_operational": 4, 00:14:34.948 "base_bdevs_list": [ 00:14:34.948 { 00:14:34.948 "name": "BaseBdev1", 00:14:34.948 "uuid": "76766ea8-2734-53d5-9ef1-af2e9c3446dc", 00:14:34.948 "is_configured": true, 00:14:34.948 "data_offset": 2048, 00:14:34.948 "data_size": 63488 00:14:34.948 }, 00:14:34.948 { 00:14:34.948 "name": "BaseBdev2", 00:14:34.948 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:34.948 "is_configured": true, 00:14:34.948 "data_offset": 2048, 00:14:34.948 "data_size": 63488 00:14:34.948 }, 00:14:34.948 { 00:14:34.949 "name": "BaseBdev3", 00:14:34.949 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:34.949 "is_configured": true, 00:14:34.949 "data_offset": 2048, 00:14:34.949 "data_size": 63488 00:14:34.949 }, 00:14:34.949 { 00:14:34.949 "name": "BaseBdev4", 00:14:34.949 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:34.949 "is_configured": true, 00:14:34.949 "data_offset": 2048, 00:14:34.949 "data_size": 63488 00:14:34.949 } 00:14:34.949 ] 00:14:34.949 }' 00:14:34.949 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.949 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.208 01:58:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.208 [2024-12-07 01:58:40.606945] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.208 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.468 01:58:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.468 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:35.468 [2024-12-07 01:58:40.910220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:35.728 /dev/nbd0 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.728 1+0 records in 00:14:35.728 1+0 records out 00:14:35.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000617924 s, 6.6 MB/s 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:35.728 01:58:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:35.987 496+0 records in 00:14:35.987 496+0 records out 00:14:35.987 97517568 bytes (98 MB, 93 MiB) copied, 0.395941 s, 246 MB/s 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.987 01:58:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.987 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.246 [2024-12-07 01:58:41.609797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.246 [2024-12-07 01:58:41.625834] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.246 "name": "raid_bdev1", 00:14:36.246 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:36.246 "strip_size_kb": 64, 00:14:36.246 "state": "online", 00:14:36.246 "raid_level": "raid5f", 00:14:36.246 "superblock": true, 00:14:36.246 "num_base_bdevs": 4, 
00:14:36.246 "num_base_bdevs_discovered": 3, 00:14:36.246 "num_base_bdevs_operational": 3, 00:14:36.246 "base_bdevs_list": [ 00:14:36.246 { 00:14:36.246 "name": null, 00:14:36.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.246 "is_configured": false, 00:14:36.246 "data_offset": 0, 00:14:36.246 "data_size": 63488 00:14:36.246 }, 00:14:36.246 { 00:14:36.246 "name": "BaseBdev2", 00:14:36.246 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:36.246 "is_configured": true, 00:14:36.246 "data_offset": 2048, 00:14:36.246 "data_size": 63488 00:14:36.246 }, 00:14:36.246 { 00:14:36.246 "name": "BaseBdev3", 00:14:36.246 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:36.246 "is_configured": true, 00:14:36.246 "data_offset": 2048, 00:14:36.246 "data_size": 63488 00:14:36.246 }, 00:14:36.246 { 00:14:36.246 "name": "BaseBdev4", 00:14:36.246 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:36.246 "is_configured": true, 00:14:36.246 "data_offset": 2048, 00:14:36.246 "data_size": 63488 00:14:36.246 } 00:14:36.246 ] 00:14:36.246 }' 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.246 01:58:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 01:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:36.814 01:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.814 01:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.814 [2024-12-07 01:58:42.089076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:36.814 [2024-12-07 01:58:42.092394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:36.814 [2024-12-07 01:58:42.094635] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.814 
01:58:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.814 01:58:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.751 "name": "raid_bdev1", 00:14:37.751 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:37.751 "strip_size_kb": 64, 00:14:37.751 "state": "online", 00:14:37.751 "raid_level": "raid5f", 00:14:37.751 "superblock": true, 00:14:37.751 "num_base_bdevs": 4, 00:14:37.751 "num_base_bdevs_discovered": 4, 00:14:37.751 "num_base_bdevs_operational": 4, 00:14:37.751 "process": { 00:14:37.751 "type": "rebuild", 00:14:37.751 "target": "spare", 00:14:37.751 "progress": { 00:14:37.751 "blocks": 19200, 00:14:37.751 "percent": 10 00:14:37.751 } 00:14:37.751 }, 00:14:37.751 
"base_bdevs_list": [ 00:14:37.751 { 00:14:37.751 "name": "spare", 00:14:37.751 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 }, 00:14:37.751 { 00:14:37.751 "name": "BaseBdev2", 00:14:37.751 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 }, 00:14:37.751 { 00:14:37.751 "name": "BaseBdev3", 00:14:37.751 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 }, 00:14:37.751 { 00:14:37.751 "name": "BaseBdev4", 00:14:37.751 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:37.751 "is_configured": true, 00:14:37.751 "data_offset": 2048, 00:14:37.751 "data_size": 63488 00:14:37.751 } 00:14:37.751 ] 00:14:37.751 }' 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.751 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.011 [2024-12-07 01:58:43.249307] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.011 [2024-12-07 01:58:43.300711] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:14:38.011 [2024-12-07 01:58:43.300765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.011 [2024-12-07 01:58:43.300784] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:38.011 [2024-12-07 01:58:43.300794] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.011 
01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.011 "name": "raid_bdev1", 00:14:38.011 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:38.011 "strip_size_kb": 64, 00:14:38.011 "state": "online", 00:14:38.011 "raid_level": "raid5f", 00:14:38.011 "superblock": true, 00:14:38.011 "num_base_bdevs": 4, 00:14:38.011 "num_base_bdevs_discovered": 3, 00:14:38.011 "num_base_bdevs_operational": 3, 00:14:38.011 "base_bdevs_list": [ 00:14:38.011 { 00:14:38.011 "name": null, 00:14:38.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.011 "is_configured": false, 00:14:38.011 "data_offset": 0, 00:14:38.011 "data_size": 63488 00:14:38.011 }, 00:14:38.011 { 00:14:38.011 "name": "BaseBdev2", 00:14:38.011 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:38.011 "is_configured": true, 00:14:38.011 "data_offset": 2048, 00:14:38.011 "data_size": 63488 00:14:38.011 }, 00:14:38.011 { 00:14:38.011 "name": "BaseBdev3", 00:14:38.011 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:38.011 "is_configured": true, 00:14:38.011 "data_offset": 2048, 00:14:38.011 "data_size": 63488 00:14:38.011 }, 00:14:38.011 { 00:14:38.011 "name": "BaseBdev4", 00:14:38.011 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:38.011 "is_configured": true, 00:14:38.011 "data_offset": 2048, 00:14:38.011 "data_size": 63488 00:14:38.011 } 00:14:38.011 ] 00:14:38.011 }' 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.011 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.270 01:58:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.270 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.529 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.529 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.529 "name": "raid_bdev1", 00:14:38.529 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:38.529 "strip_size_kb": 64, 00:14:38.529 "state": "online", 00:14:38.529 "raid_level": "raid5f", 00:14:38.529 "superblock": true, 00:14:38.529 "num_base_bdevs": 4, 00:14:38.529 "num_base_bdevs_discovered": 3, 00:14:38.529 "num_base_bdevs_operational": 3, 00:14:38.529 "base_bdevs_list": [ 00:14:38.529 { 00:14:38.529 "name": null, 00:14:38.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.530 "is_configured": false, 00:14:38.530 "data_offset": 0, 00:14:38.530 "data_size": 63488 00:14:38.530 }, 00:14:38.530 { 00:14:38.530 "name": "BaseBdev2", 00:14:38.530 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:38.530 "is_configured": true, 00:14:38.530 "data_offset": 2048, 00:14:38.530 "data_size": 63488 00:14:38.530 }, 00:14:38.530 { 00:14:38.530 "name": "BaseBdev3", 00:14:38.530 "uuid": 
"528561d8-8320-588f-ae09-809a71c6212c", 00:14:38.530 "is_configured": true, 00:14:38.530 "data_offset": 2048, 00:14:38.530 "data_size": 63488 00:14:38.530 }, 00:14:38.530 { 00:14:38.530 "name": "BaseBdev4", 00:14:38.530 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:38.530 "is_configured": true, 00:14:38.530 "data_offset": 2048, 00:14:38.530 "data_size": 63488 00:14:38.530 } 00:14:38.530 ] 00:14:38.530 }' 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.530 [2024-12-07 01:58:43.849239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.530 [2024-12-07 01:58:43.852642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:38.530 [2024-12-07 01:58:43.854798] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.530 01:58:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.464 "name": "raid_bdev1", 00:14:39.464 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:39.464 "strip_size_kb": 64, 00:14:39.464 "state": "online", 00:14:39.464 "raid_level": "raid5f", 00:14:39.464 "superblock": true, 00:14:39.464 "num_base_bdevs": 4, 00:14:39.464 "num_base_bdevs_discovered": 4, 00:14:39.464 "num_base_bdevs_operational": 4, 00:14:39.464 "process": { 00:14:39.464 "type": "rebuild", 00:14:39.464 "target": "spare", 00:14:39.464 "progress": { 00:14:39.464 "blocks": 19200, 00:14:39.464 "percent": 10 00:14:39.464 } 00:14:39.464 }, 00:14:39.464 "base_bdevs_list": [ 00:14:39.464 { 00:14:39.464 "name": "spare", 00:14:39.464 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:39.464 "is_configured": true, 00:14:39.464 "data_offset": 2048, 00:14:39.464 "data_size": 63488 00:14:39.464 }, 00:14:39.464 { 00:14:39.464 "name": "BaseBdev2", 00:14:39.464 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:39.464 
"is_configured": true, 00:14:39.464 "data_offset": 2048, 00:14:39.464 "data_size": 63488 00:14:39.464 }, 00:14:39.464 { 00:14:39.464 "name": "BaseBdev3", 00:14:39.464 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:39.464 "is_configured": true, 00:14:39.464 "data_offset": 2048, 00:14:39.464 "data_size": 63488 00:14:39.464 }, 00:14:39.464 { 00:14:39.464 "name": "BaseBdev4", 00:14:39.464 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:39.464 "is_configured": true, 00:14:39.464 "data_offset": 2048, 00:14:39.464 "data_size": 63488 00:14:39.464 } 00:14:39.464 ] 00:14:39.464 }' 00:14:39.464 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.723 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.723 01:58:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.723 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.723 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:39.723 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:39.724 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.724 "name": "raid_bdev1", 00:14:39.724 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:39.724 "strip_size_kb": 64, 00:14:39.724 "state": "online", 00:14:39.724 "raid_level": "raid5f", 00:14:39.724 "superblock": true, 00:14:39.724 "num_base_bdevs": 4, 00:14:39.724 "num_base_bdevs_discovered": 4, 00:14:39.724 "num_base_bdevs_operational": 4, 00:14:39.724 "process": { 00:14:39.724 "type": "rebuild", 00:14:39.724 "target": "spare", 00:14:39.724 "progress": { 00:14:39.724 "blocks": 21120, 00:14:39.724 "percent": 11 00:14:39.724 } 00:14:39.724 }, 00:14:39.724 "base_bdevs_list": [ 00:14:39.724 { 00:14:39.724 "name": "spare", 00:14:39.724 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:39.724 "is_configured": true, 00:14:39.724 "data_offset": 2048, 00:14:39.724 "data_size": 63488 00:14:39.724 }, 00:14:39.724 { 00:14:39.724 "name": "BaseBdev2", 00:14:39.724 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:39.724 
"is_configured": true, 00:14:39.724 "data_offset": 2048, 00:14:39.724 "data_size": 63488 00:14:39.724 }, 00:14:39.724 { 00:14:39.724 "name": "BaseBdev3", 00:14:39.724 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:39.724 "is_configured": true, 00:14:39.724 "data_offset": 2048, 00:14:39.724 "data_size": 63488 00:14:39.724 }, 00:14:39.724 { 00:14:39.724 "name": "BaseBdev4", 00:14:39.724 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:39.724 "is_configured": true, 00:14:39.724 "data_offset": 2048, 00:14:39.724 "data_size": 63488 00:14:39.724 } 00:14:39.724 ] 00:14:39.724 }' 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.724 01:58:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.099 01:58:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.099 "name": "raid_bdev1", 00:14:41.099 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:41.099 "strip_size_kb": 64, 00:14:41.099 "state": "online", 00:14:41.099 "raid_level": "raid5f", 00:14:41.099 "superblock": true, 00:14:41.099 "num_base_bdevs": 4, 00:14:41.099 "num_base_bdevs_discovered": 4, 00:14:41.099 "num_base_bdevs_operational": 4, 00:14:41.099 "process": { 00:14:41.099 "type": "rebuild", 00:14:41.099 "target": "spare", 00:14:41.099 "progress": { 00:14:41.099 "blocks": 42240, 00:14:41.099 "percent": 22 00:14:41.099 } 00:14:41.099 }, 00:14:41.099 "base_bdevs_list": [ 00:14:41.099 { 00:14:41.099 "name": "spare", 00:14:41.099 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:41.099 "is_configured": true, 00:14:41.099 "data_offset": 2048, 00:14:41.099 "data_size": 63488 00:14:41.099 }, 00:14:41.099 { 00:14:41.099 "name": "BaseBdev2", 00:14:41.099 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:41.099 "is_configured": true, 00:14:41.099 "data_offset": 2048, 00:14:41.099 "data_size": 63488 00:14:41.099 }, 00:14:41.099 { 00:14:41.099 "name": "BaseBdev3", 00:14:41.099 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:41.099 "is_configured": true, 00:14:41.099 "data_offset": 2048, 00:14:41.099 "data_size": 63488 00:14:41.099 }, 00:14:41.099 { 00:14:41.099 "name": "BaseBdev4", 00:14:41.099 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:41.099 "is_configured": true, 00:14:41.099 "data_offset": 2048, 00:14:41.099 
"data_size": 63488 00:14:41.099 } 00:14:41.099 ] 00:14:41.099 }' 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.099 01:58:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.033 "name": 
"raid_bdev1", 00:14:42.033 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:42.033 "strip_size_kb": 64, 00:14:42.033 "state": "online", 00:14:42.033 "raid_level": "raid5f", 00:14:42.033 "superblock": true, 00:14:42.033 "num_base_bdevs": 4, 00:14:42.033 "num_base_bdevs_discovered": 4, 00:14:42.033 "num_base_bdevs_operational": 4, 00:14:42.033 "process": { 00:14:42.033 "type": "rebuild", 00:14:42.033 "target": "spare", 00:14:42.033 "progress": { 00:14:42.033 "blocks": 65280, 00:14:42.033 "percent": 34 00:14:42.033 } 00:14:42.033 }, 00:14:42.033 "base_bdevs_list": [ 00:14:42.033 { 00:14:42.033 "name": "spare", 00:14:42.033 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:42.033 "is_configured": true, 00:14:42.033 "data_offset": 2048, 00:14:42.033 "data_size": 63488 00:14:42.033 }, 00:14:42.033 { 00:14:42.033 "name": "BaseBdev2", 00:14:42.033 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:42.033 "is_configured": true, 00:14:42.033 "data_offset": 2048, 00:14:42.033 "data_size": 63488 00:14:42.033 }, 00:14:42.033 { 00:14:42.033 "name": "BaseBdev3", 00:14:42.033 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:42.033 "is_configured": true, 00:14:42.033 "data_offset": 2048, 00:14:42.033 "data_size": 63488 00:14:42.033 }, 00:14:42.033 { 00:14:42.033 "name": "BaseBdev4", 00:14:42.033 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:42.033 "is_configured": true, 00:14:42.033 "data_offset": 2048, 00:14:42.033 "data_size": 63488 00:14:42.033 } 00:14:42.033 ] 00:14:42.033 }' 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.033 01:58:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.033 01:58:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.968 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.227 "name": "raid_bdev1", 00:14:43.227 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:43.227 "strip_size_kb": 64, 00:14:43.227 "state": "online", 00:14:43.227 "raid_level": "raid5f", 00:14:43.227 "superblock": true, 00:14:43.227 "num_base_bdevs": 4, 00:14:43.227 "num_base_bdevs_discovered": 4, 00:14:43.227 "num_base_bdevs_operational": 4, 00:14:43.227 "process": { 00:14:43.227 "type": "rebuild", 00:14:43.227 "target": "spare", 00:14:43.227 "progress": { 00:14:43.227 "blocks": 86400, 00:14:43.227 "percent": 45 00:14:43.227 } 00:14:43.227 }, 00:14:43.227 
"base_bdevs_list": [ 00:14:43.227 { 00:14:43.227 "name": "spare", 00:14:43.227 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:43.227 "is_configured": true, 00:14:43.227 "data_offset": 2048, 00:14:43.227 "data_size": 63488 00:14:43.227 }, 00:14:43.227 { 00:14:43.227 "name": "BaseBdev2", 00:14:43.227 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:43.227 "is_configured": true, 00:14:43.227 "data_offset": 2048, 00:14:43.227 "data_size": 63488 00:14:43.227 }, 00:14:43.227 { 00:14:43.227 "name": "BaseBdev3", 00:14:43.227 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:43.227 "is_configured": true, 00:14:43.227 "data_offset": 2048, 00:14:43.227 "data_size": 63488 00:14:43.227 }, 00:14:43.227 { 00:14:43.227 "name": "BaseBdev4", 00:14:43.227 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:43.227 "is_configured": true, 00:14:43.227 "data_offset": 2048, 00:14:43.227 "data_size": 63488 00:14:43.227 } 00:14:43.227 ] 00:14:43.227 }' 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.227 01:58:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.207 "name": "raid_bdev1", 00:14:44.207 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:44.207 "strip_size_kb": 64, 00:14:44.207 "state": "online", 00:14:44.207 "raid_level": "raid5f", 00:14:44.207 "superblock": true, 00:14:44.207 "num_base_bdevs": 4, 00:14:44.207 "num_base_bdevs_discovered": 4, 00:14:44.207 "num_base_bdevs_operational": 4, 00:14:44.207 "process": { 00:14:44.207 "type": "rebuild", 00:14:44.207 "target": "spare", 00:14:44.207 "progress": { 00:14:44.207 "blocks": 109440, 00:14:44.207 "percent": 57 00:14:44.207 } 00:14:44.207 }, 00:14:44.207 "base_bdevs_list": [ 00:14:44.207 { 00:14:44.207 "name": "spare", 00:14:44.207 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:44.207 "is_configured": true, 00:14:44.207 "data_offset": 2048, 00:14:44.207 "data_size": 63488 00:14:44.207 }, 00:14:44.207 { 00:14:44.207 "name": "BaseBdev2", 00:14:44.207 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:44.207 "is_configured": true, 00:14:44.207 "data_offset": 2048, 00:14:44.207 "data_size": 63488 00:14:44.207 }, 00:14:44.207 { 00:14:44.207 "name": "BaseBdev3", 00:14:44.207 "uuid": 
"528561d8-8320-588f-ae09-809a71c6212c", 00:14:44.207 "is_configured": true, 00:14:44.207 "data_offset": 2048, 00:14:44.207 "data_size": 63488 00:14:44.207 }, 00:14:44.207 { 00:14:44.207 "name": "BaseBdev4", 00:14:44.207 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:44.207 "is_configured": true, 00:14:44.207 "data_offset": 2048, 00:14:44.207 "data_size": 63488 00:14:44.207 } 00:14:44.207 ] 00:14:44.207 }' 00:14:44.207 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.464 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.464 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.464 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.464 01:58:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.397 "name": "raid_bdev1", 00:14:45.397 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:45.397 "strip_size_kb": 64, 00:14:45.397 "state": "online", 00:14:45.397 "raid_level": "raid5f", 00:14:45.397 "superblock": true, 00:14:45.397 "num_base_bdevs": 4, 00:14:45.397 "num_base_bdevs_discovered": 4, 00:14:45.397 "num_base_bdevs_operational": 4, 00:14:45.397 "process": { 00:14:45.397 "type": "rebuild", 00:14:45.397 "target": "spare", 00:14:45.397 "progress": { 00:14:45.397 "blocks": 130560, 00:14:45.397 "percent": 68 00:14:45.397 } 00:14:45.397 }, 00:14:45.397 "base_bdevs_list": [ 00:14:45.397 { 00:14:45.397 "name": "spare", 00:14:45.397 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:45.397 "is_configured": true, 00:14:45.397 "data_offset": 2048, 00:14:45.397 "data_size": 63488 00:14:45.397 }, 00:14:45.397 { 00:14:45.397 "name": "BaseBdev2", 00:14:45.397 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:45.397 "is_configured": true, 00:14:45.397 "data_offset": 2048, 00:14:45.397 "data_size": 63488 00:14:45.397 }, 00:14:45.397 { 00:14:45.397 "name": "BaseBdev3", 00:14:45.397 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:45.397 "is_configured": true, 00:14:45.397 "data_offset": 2048, 00:14:45.397 "data_size": 63488 00:14:45.397 }, 00:14:45.397 { 00:14:45.397 "name": "BaseBdev4", 00:14:45.397 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:45.397 "is_configured": true, 00:14:45.397 "data_offset": 2048, 00:14:45.397 "data_size": 63488 00:14:45.397 } 00:14:45.397 ] 00:14:45.397 }' 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.397 01:58:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.397 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.656 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.656 01:58:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.589 "name": "raid_bdev1", 00:14:46.589 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:46.589 "strip_size_kb": 64, 00:14:46.589 "state": "online", 00:14:46.589 "raid_level": "raid5f", 00:14:46.589 "superblock": true, 
00:14:46.589 "num_base_bdevs": 4, 00:14:46.589 "num_base_bdevs_discovered": 4, 00:14:46.589 "num_base_bdevs_operational": 4, 00:14:46.589 "process": { 00:14:46.589 "type": "rebuild", 00:14:46.589 "target": "spare", 00:14:46.589 "progress": { 00:14:46.589 "blocks": 153600, 00:14:46.589 "percent": 80 00:14:46.589 } 00:14:46.589 }, 00:14:46.589 "base_bdevs_list": [ 00:14:46.589 { 00:14:46.589 "name": "spare", 00:14:46.589 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:46.589 "is_configured": true, 00:14:46.589 "data_offset": 2048, 00:14:46.589 "data_size": 63488 00:14:46.589 }, 00:14:46.589 { 00:14:46.589 "name": "BaseBdev2", 00:14:46.589 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:46.589 "is_configured": true, 00:14:46.589 "data_offset": 2048, 00:14:46.589 "data_size": 63488 00:14:46.589 }, 00:14:46.589 { 00:14:46.589 "name": "BaseBdev3", 00:14:46.589 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:46.589 "is_configured": true, 00:14:46.589 "data_offset": 2048, 00:14:46.589 "data_size": 63488 00:14:46.589 }, 00:14:46.589 { 00:14:46.589 "name": "BaseBdev4", 00:14:46.589 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:46.589 "is_configured": true, 00:14:46.589 "data_offset": 2048, 00:14:46.589 "data_size": 63488 00:14:46.589 } 00:14:46.589 ] 00:14:46.589 }' 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.589 01:58:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.589 01:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.589 01:58:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.963 01:58:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.963 "name": "raid_bdev1", 00:14:47.963 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:47.963 "strip_size_kb": 64, 00:14:47.963 "state": "online", 00:14:47.963 "raid_level": "raid5f", 00:14:47.963 "superblock": true, 00:14:47.963 "num_base_bdevs": 4, 00:14:47.963 "num_base_bdevs_discovered": 4, 00:14:47.963 "num_base_bdevs_operational": 4, 00:14:47.963 "process": { 00:14:47.963 "type": "rebuild", 00:14:47.963 "target": "spare", 00:14:47.963 "progress": { 00:14:47.963 "blocks": 174720, 00:14:47.963 "percent": 91 00:14:47.963 } 00:14:47.963 }, 00:14:47.963 "base_bdevs_list": [ 00:14:47.963 { 00:14:47.963 "name": "spare", 00:14:47.963 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 
"data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": "BaseBdev2", 00:14:47.963 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 "data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": "BaseBdev3", 00:14:47.963 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 "data_size": 63488 00:14:47.963 }, 00:14:47.963 { 00:14:47.963 "name": "BaseBdev4", 00:14:47.963 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:47.963 "is_configured": true, 00:14:47.963 "data_offset": 2048, 00:14:47.963 "data_size": 63488 00:14:47.963 } 00:14:47.963 ] 00:14:47.963 }' 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.963 01:58:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.539 [2024-12-07 01:58:53.896888] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:48.539 [2024-12-07 01:58:53.897034] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:48.539 [2024-12-07 01:58:53.897209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.797 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.056 "name": "raid_bdev1", 00:14:49.056 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:49.056 "strip_size_kb": 64, 00:14:49.056 "state": "online", 00:14:49.056 "raid_level": "raid5f", 00:14:49.056 "superblock": true, 00:14:49.056 "num_base_bdevs": 4, 00:14:49.056 "num_base_bdevs_discovered": 4, 00:14:49.056 "num_base_bdevs_operational": 4, 00:14:49.056 "base_bdevs_list": [ 00:14:49.056 { 00:14:49.056 "name": "spare", 00:14:49.056 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev2", 00:14:49.056 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev3", 00:14:49.056 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 
00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev4", 00:14:49.056 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 } 00:14:49.056 ] 00:14:49.056 }' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.056 01:58:54 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.056 "name": "raid_bdev1", 00:14:49.056 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:49.056 "strip_size_kb": 64, 00:14:49.056 "state": "online", 00:14:49.056 "raid_level": "raid5f", 00:14:49.056 "superblock": true, 00:14:49.056 "num_base_bdevs": 4, 00:14:49.056 "num_base_bdevs_discovered": 4, 00:14:49.056 "num_base_bdevs_operational": 4, 00:14:49.056 "base_bdevs_list": [ 00:14:49.056 { 00:14:49.056 "name": "spare", 00:14:49.056 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev2", 00:14:49.056 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev3", 00:14:49.056 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 }, 00:14:49.056 { 00:14:49.056 "name": "BaseBdev4", 00:14:49.056 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:49.056 "is_configured": true, 00:14:49.056 "data_offset": 2048, 00:14:49.056 "data_size": 63488 00:14:49.056 } 00:14:49.056 ] 00:14:49.056 }' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.056 01:58:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:49.056 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.057 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.315 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.315 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.315 "name": "raid_bdev1", 00:14:49.315 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:49.315 "strip_size_kb": 64, 00:14:49.315 "state": "online", 00:14:49.315 "raid_level": "raid5f", 00:14:49.315 "superblock": true, 
00:14:49.315 "num_base_bdevs": 4, 00:14:49.315 "num_base_bdevs_discovered": 4, 00:14:49.315 "num_base_bdevs_operational": 4, 00:14:49.315 "base_bdevs_list": [ 00:14:49.315 { 00:14:49.315 "name": "spare", 00:14:49.315 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:49.315 "is_configured": true, 00:14:49.315 "data_offset": 2048, 00:14:49.315 "data_size": 63488 00:14:49.315 }, 00:14:49.315 { 00:14:49.315 "name": "BaseBdev2", 00:14:49.315 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:49.315 "is_configured": true, 00:14:49.315 "data_offset": 2048, 00:14:49.315 "data_size": 63488 00:14:49.315 }, 00:14:49.315 { 00:14:49.315 "name": "BaseBdev3", 00:14:49.315 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:49.315 "is_configured": true, 00:14:49.315 "data_offset": 2048, 00:14:49.315 "data_size": 63488 00:14:49.315 }, 00:14:49.315 { 00:14:49.315 "name": "BaseBdev4", 00:14:49.315 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:49.315 "is_configured": true, 00:14:49.315 "data_offset": 2048, 00:14:49.315 "data_size": 63488 00:14:49.315 } 00:14:49.315 ] 00:14:49.315 }' 00:14:49.315 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.315 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.575 [2024-12-07 01:58:54.940570] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:49.575 [2024-12-07 01:58:54.940652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:49.575 [2024-12-07 01:58:54.940785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:49.575 
[2024-12-07 01:58:54.940903] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:49.575 [2024-12-07 01:58:54.940961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:49.575 01:58:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.575 01:58:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:49.833 /dev/nbd0 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:49.833 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:49.834 1+0 records in 00:14:49.834 1+0 records out 00:14:49.834 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493072 s, 8.3 MB/s 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:49.834 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:50.092 /dev/nbd1 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:50.092 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:50.093 01:58:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.093 1+0 records in 00:14:50.093 1+0 records out 00:14:50.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392957 s, 10.4 MB/s 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.093 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:50.351 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.609 01:58:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.609 [2024-12-07 01:58:56.059137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:50.609 [2024-12-07 01:58:56.059196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.609 [2024-12-07 01:58:56.059234] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:50.609 [2024-12-07 01:58:56.059245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.609 [2024-12-07 01:58:56.061465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.609 [2024-12-07 01:58:56.061555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:50.609 [2024-12-07 01:58:56.061649] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:50.609 [2024-12-07 01:58:56.061701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.609 [2024-12-07 01:58:56.061811] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:50.609 [2024-12-07 01:58:56.061922] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.609 [2024-12-07 01:58:56.061989] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.609 spare 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.609 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.867 [2024-12-07 01:58:56.161896] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:50.867 [2024-12-07 01:58:56.161925] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:50.867 [2024-12-07 01:58:56.162181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:14:50.867 [2024-12-07 01:58:56.162597] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:50.867 [2024-12-07 01:58:56.162610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:50.867 [2024-12-07 01:58:56.162751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.867 01:58:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.867 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.868 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.868 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.868 "name": "raid_bdev1", 00:14:50.868 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:50.868 "strip_size_kb": 64, 00:14:50.868 "state": "online", 00:14:50.868 "raid_level": "raid5f", 00:14:50.868 "superblock": true, 00:14:50.868 "num_base_bdevs": 4, 00:14:50.868 "num_base_bdevs_discovered": 4, 00:14:50.868 "num_base_bdevs_operational": 4, 00:14:50.868 "base_bdevs_list": [ 00:14:50.868 { 00:14:50.868 "name": "spare", 00:14:50.868 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:50.868 "is_configured": true, 00:14:50.868 "data_offset": 2048, 00:14:50.868 "data_size": 63488 
00:14:50.868 }, 00:14:50.868 { 00:14:50.868 "name": "BaseBdev2", 00:14:50.868 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:50.868 "is_configured": true, 00:14:50.868 "data_offset": 2048, 00:14:50.868 "data_size": 63488 00:14:50.868 }, 00:14:50.868 { 00:14:50.868 "name": "BaseBdev3", 00:14:50.868 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:50.868 "is_configured": true, 00:14:50.868 "data_offset": 2048, 00:14:50.868 "data_size": 63488 00:14:50.868 }, 00:14:50.868 { 00:14:50.868 "name": "BaseBdev4", 00:14:50.868 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:50.868 "is_configured": true, 00:14:50.868 "data_offset": 2048, 00:14:50.868 "data_size": 63488 00:14:50.868 } 00:14:50.868 ] 00:14:50.868 }' 00:14:50.868 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.868 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.126 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.385 01:58:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.385 "name": "raid_bdev1", 00:14:51.385 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:51.385 "strip_size_kb": 64, 00:14:51.385 "state": "online", 00:14:51.385 "raid_level": "raid5f", 00:14:51.385 "superblock": true, 00:14:51.385 "num_base_bdevs": 4, 00:14:51.385 "num_base_bdevs_discovered": 4, 00:14:51.385 "num_base_bdevs_operational": 4, 00:14:51.385 "base_bdevs_list": [ 00:14:51.385 { 00:14:51.385 "name": "spare", 00:14:51.385 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 2048, 00:14:51.385 "data_size": 63488 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev2", 00:14:51.385 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 2048, 00:14:51.385 "data_size": 63488 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev3", 00:14:51.385 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 2048, 00:14:51.385 "data_size": 63488 00:14:51.385 }, 00:14:51.385 { 00:14:51.385 "name": "BaseBdev4", 00:14:51.385 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:51.385 "is_configured": true, 00:14:51.385 "data_offset": 2048, 00:14:51.385 "data_size": 63488 00:14:51.385 } 00:14:51.385 ] 00:14:51.385 }' 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.385 01:58:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.385 [2024-12-07 01:58:56.767470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.385 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.386 "name": "raid_bdev1", 00:14:51.386 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:51.386 "strip_size_kb": 64, 00:14:51.386 "state": "online", 00:14:51.386 "raid_level": "raid5f", 00:14:51.386 "superblock": true, 00:14:51.386 "num_base_bdevs": 4, 00:14:51.386 "num_base_bdevs_discovered": 3, 00:14:51.386 "num_base_bdevs_operational": 3, 00:14:51.386 "base_bdevs_list": [ 00:14:51.386 { 00:14:51.386 "name": null, 00:14:51.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.386 "is_configured": false, 00:14:51.386 "data_offset": 0, 00:14:51.386 "data_size": 63488 00:14:51.386 }, 00:14:51.386 { 00:14:51.386 "name": "BaseBdev2", 00:14:51.386 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:51.386 "is_configured": true, 00:14:51.386 "data_offset": 2048, 00:14:51.386 "data_size": 63488 00:14:51.386 }, 00:14:51.386 { 00:14:51.386 "name": "BaseBdev3", 00:14:51.386 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:51.386 "is_configured": true, 00:14:51.386 "data_offset": 2048, 
00:14:51.386 "data_size": 63488 00:14:51.386 }, 00:14:51.386 { 00:14:51.386 "name": "BaseBdev4", 00:14:51.386 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:51.386 "is_configured": true, 00:14:51.386 "data_offset": 2048, 00:14:51.386 "data_size": 63488 00:14:51.386 } 00:14:51.386 ] 00:14:51.386 }' 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.386 01:58:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.953 01:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.953 01:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.953 01:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.953 [2024-12-07 01:58:57.214758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.953 [2024-12-07 01:58:57.215009] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:51.953 [2024-12-07 01:58:57.215080] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:51.953 [2024-12-07 01:58:57.215160] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.953 [2024-12-07 01:58:57.218375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:14:51.953 [2024-12-07 01:58:57.220634] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.953 01:58:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.953 01:58:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.891 "name": "raid_bdev1", 00:14:52.891 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:52.891 "strip_size_kb": 64, 00:14:52.891 "state": "online", 00:14:52.891 
"raid_level": "raid5f", 00:14:52.891 "superblock": true, 00:14:52.891 "num_base_bdevs": 4, 00:14:52.891 "num_base_bdevs_discovered": 4, 00:14:52.891 "num_base_bdevs_operational": 4, 00:14:52.891 "process": { 00:14:52.891 "type": "rebuild", 00:14:52.891 "target": "spare", 00:14:52.891 "progress": { 00:14:52.891 "blocks": 19200, 00:14:52.891 "percent": 10 00:14:52.891 } 00:14:52.891 }, 00:14:52.891 "base_bdevs_list": [ 00:14:52.891 { 00:14:52.891 "name": "spare", 00:14:52.891 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:52.891 "is_configured": true, 00:14:52.891 "data_offset": 2048, 00:14:52.891 "data_size": 63488 00:14:52.891 }, 00:14:52.891 { 00:14:52.891 "name": "BaseBdev2", 00:14:52.891 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:52.891 "is_configured": true, 00:14:52.891 "data_offset": 2048, 00:14:52.891 "data_size": 63488 00:14:52.891 }, 00:14:52.891 { 00:14:52.891 "name": "BaseBdev3", 00:14:52.891 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:52.891 "is_configured": true, 00:14:52.891 "data_offset": 2048, 00:14:52.891 "data_size": 63488 00:14:52.891 }, 00:14:52.891 { 00:14:52.891 "name": "BaseBdev4", 00:14:52.891 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:52.891 "is_configured": true, 00:14:52.891 "data_offset": 2048, 00:14:52.891 "data_size": 63488 00:14:52.891 } 00:14:52.891 ] 00:14:52.891 }' 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.891 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.151 [2024-12-07 01:58:58.383924] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.151 [2024-12-07 01:58:58.426025] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.151 [2024-12-07 01:58:58.426094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.151 [2024-12-07 01:58:58.426114] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.151 [2024-12-07 01:58:58.426121] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.151 "name": "raid_bdev1", 00:14:53.151 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:53.151 "strip_size_kb": 64, 00:14:53.151 "state": "online", 00:14:53.151 "raid_level": "raid5f", 00:14:53.151 "superblock": true, 00:14:53.151 "num_base_bdevs": 4, 00:14:53.151 "num_base_bdevs_discovered": 3, 00:14:53.151 "num_base_bdevs_operational": 3, 00:14:53.151 "base_bdevs_list": [ 00:14:53.151 { 00:14:53.151 "name": null, 00:14:53.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.151 "is_configured": false, 00:14:53.151 "data_offset": 0, 00:14:53.151 "data_size": 63488 00:14:53.151 }, 00:14:53.151 { 00:14:53.151 "name": "BaseBdev2", 00:14:53.151 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:53.151 "is_configured": true, 00:14:53.151 "data_offset": 2048, 00:14:53.151 "data_size": 63488 00:14:53.151 }, 00:14:53.151 { 00:14:53.151 "name": "BaseBdev3", 00:14:53.151 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:53.151 "is_configured": true, 00:14:53.151 "data_offset": 2048, 00:14:53.151 "data_size": 63488 00:14:53.151 }, 00:14:53.151 { 00:14:53.151 "name": "BaseBdev4", 00:14:53.151 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:53.151 "is_configured": true, 00:14:53.151 "data_offset": 2048, 00:14:53.151 "data_size": 63488 00:14:53.151 } 00:14:53.151 ] 00:14:53.151 
}' 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.151 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.410 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:53.410 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.410 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.410 [2024-12-07 01:58:58.866304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:53.410 [2024-12-07 01:58:58.866363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:53.410 [2024-12-07 01:58:58.866392] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:53.410 [2024-12-07 01:58:58.866401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:53.410 [2024-12-07 01:58:58.866881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:53.410 [2024-12-07 01:58:58.866909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:53.410 [2024-12-07 01:58:58.867000] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:53.410 [2024-12-07 01:58:58.867013] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:53.410 [2024-12-07 01:58:58.867029] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:53.410 [2024-12-07 01:58:58.867057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.410 [2024-12-07 01:58:58.870311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:14:53.410 spare 00:14:53.668 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.668 01:58:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:53.668 [2024-12-07 01:58:58.872660] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.610 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.610 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.610 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.610 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.610 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.611 "name": "raid_bdev1", 00:14:54.611 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:54.611 "strip_size_kb": 64, 00:14:54.611 "state": 
"online", 00:14:54.611 "raid_level": "raid5f", 00:14:54.611 "superblock": true, 00:14:54.611 "num_base_bdevs": 4, 00:14:54.611 "num_base_bdevs_discovered": 4, 00:14:54.611 "num_base_bdevs_operational": 4, 00:14:54.611 "process": { 00:14:54.611 "type": "rebuild", 00:14:54.611 "target": "spare", 00:14:54.611 "progress": { 00:14:54.611 "blocks": 19200, 00:14:54.611 "percent": 10 00:14:54.611 } 00:14:54.611 }, 00:14:54.611 "base_bdevs_list": [ 00:14:54.611 { 00:14:54.611 "name": "spare", 00:14:54.611 "uuid": "38223427-189e-580f-afd7-606398fc1a36", 00:14:54.611 "is_configured": true, 00:14:54.611 "data_offset": 2048, 00:14:54.611 "data_size": 63488 00:14:54.611 }, 00:14:54.611 { 00:14:54.611 "name": "BaseBdev2", 00:14:54.611 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:54.611 "is_configured": true, 00:14:54.611 "data_offset": 2048, 00:14:54.611 "data_size": 63488 00:14:54.611 }, 00:14:54.611 { 00:14:54.611 "name": "BaseBdev3", 00:14:54.611 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:54.611 "is_configured": true, 00:14:54.611 "data_offset": 2048, 00:14:54.611 "data_size": 63488 00:14:54.611 }, 00:14:54.611 { 00:14:54.611 "name": "BaseBdev4", 00:14:54.611 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:54.611 "is_configured": true, 00:14:54.611 "data_offset": 2048, 00:14:54.611 "data_size": 63488 00:14:54.611 } 00:14:54.611 ] 00:14:54.611 }' 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.611 01:58:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.611 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.611 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:54.611 01:59:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.611 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.611 [2024-12-07 01:59:00.033059] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.875 [2024-12-07 01:59:00.078291] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:54.875 [2024-12-07 01:59:00.078395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.875 [2024-12-07 01:59:00.078414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.875 [2024-12-07 01:59:00.078425] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.875 01:59:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.875 "name": "raid_bdev1", 00:14:54.875 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:54.875 "strip_size_kb": 64, 00:14:54.875 "state": "online", 00:14:54.875 "raid_level": "raid5f", 00:14:54.875 "superblock": true, 00:14:54.875 "num_base_bdevs": 4, 00:14:54.875 "num_base_bdevs_discovered": 3, 00:14:54.875 "num_base_bdevs_operational": 3, 00:14:54.875 "base_bdevs_list": [ 00:14:54.875 { 00:14:54.875 "name": null, 00:14:54.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.875 "is_configured": false, 00:14:54.875 "data_offset": 0, 00:14:54.875 "data_size": 63488 00:14:54.875 }, 00:14:54.875 { 00:14:54.875 "name": "BaseBdev2", 00:14:54.875 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:54.875 "is_configured": true, 00:14:54.875 "data_offset": 2048, 00:14:54.875 "data_size": 63488 00:14:54.875 }, 00:14:54.875 { 00:14:54.875 "name": "BaseBdev3", 00:14:54.875 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:54.875 "is_configured": true, 00:14:54.875 "data_offset": 2048, 00:14:54.875 "data_size": 63488 00:14:54.875 }, 00:14:54.875 { 00:14:54.875 "name": "BaseBdev4", 00:14:54.875 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:54.875 "is_configured": true, 00:14:54.875 "data_offset": 2048, 00:14:54.875 
"data_size": 63488 00:14:54.875 } 00:14:54.875 ] 00:14:54.875 }' 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.875 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.135 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.135 "name": "raid_bdev1", 00:14:55.135 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:55.135 "strip_size_kb": 64, 00:14:55.135 "state": "online", 00:14:55.135 "raid_level": "raid5f", 00:14:55.135 "superblock": true, 00:14:55.135 "num_base_bdevs": 4, 00:14:55.135 "num_base_bdevs_discovered": 3, 00:14:55.135 "num_base_bdevs_operational": 3, 00:14:55.135 "base_bdevs_list": [ 00:14:55.135 { 00:14:55.135 "name": null, 00:14:55.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.135 
"is_configured": false, 00:14:55.135 "data_offset": 0, 00:14:55.135 "data_size": 63488 00:14:55.135 }, 00:14:55.135 { 00:14:55.135 "name": "BaseBdev2", 00:14:55.135 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:55.135 "is_configured": true, 00:14:55.136 "data_offset": 2048, 00:14:55.136 "data_size": 63488 00:14:55.136 }, 00:14:55.136 { 00:14:55.136 "name": "BaseBdev3", 00:14:55.136 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:55.136 "is_configured": true, 00:14:55.136 "data_offset": 2048, 00:14:55.136 "data_size": 63488 00:14:55.136 }, 00:14:55.136 { 00:14:55.136 "name": "BaseBdev4", 00:14:55.136 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:55.136 "is_configured": true, 00:14:55.136 "data_offset": 2048, 00:14:55.136 "data_size": 63488 00:14:55.136 } 00:14:55.136 ] 00:14:55.136 }' 00:14:55.136 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.394 01:59:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.394 [2024-12-07 01:59:00.662492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:55.394 [2024-12-07 01:59:00.662546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.394 [2024-12-07 01:59:00.662582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:55.394 [2024-12-07 01:59:00.662593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.394 [2024-12-07 01:59:00.663006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.394 [2024-12-07 01:59:00.663024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:55.394 [2024-12-07 01:59:00.663089] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:55.394 [2024-12-07 01:59:00.663108] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:55.394 [2024-12-07 01:59:00.663115] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:55.394 [2024-12-07 01:59:00.663136] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:55.394 BaseBdev1 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.394 01:59:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.333 "name": "raid_bdev1", 00:14:56.333 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:56.333 "strip_size_kb": 64, 00:14:56.333 "state": "online", 00:14:56.333 "raid_level": "raid5f", 00:14:56.333 "superblock": true, 00:14:56.333 "num_base_bdevs": 4, 00:14:56.333 "num_base_bdevs_discovered": 3, 00:14:56.333 "num_base_bdevs_operational": 3, 00:14:56.333 "base_bdevs_list": [ 00:14:56.333 { 00:14:56.333 "name": null, 00:14:56.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.333 "is_configured": false, 00:14:56.333 
"data_offset": 0, 00:14:56.333 "data_size": 63488 00:14:56.333 }, 00:14:56.333 { 00:14:56.333 "name": "BaseBdev2", 00:14:56.333 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:56.333 "is_configured": true, 00:14:56.333 "data_offset": 2048, 00:14:56.333 "data_size": 63488 00:14:56.333 }, 00:14:56.333 { 00:14:56.333 "name": "BaseBdev3", 00:14:56.333 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:56.333 "is_configured": true, 00:14:56.333 "data_offset": 2048, 00:14:56.333 "data_size": 63488 00:14:56.333 }, 00:14:56.333 { 00:14:56.333 "name": "BaseBdev4", 00:14:56.333 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:56.333 "is_configured": true, 00:14:56.333 "data_offset": 2048, 00:14:56.333 "data_size": 63488 00:14:56.333 } 00:14:56.333 ] 00:14:56.333 }' 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.333 01:59:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.901 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.902 "name": "raid_bdev1", 00:14:56.902 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:56.902 "strip_size_kb": 64, 00:14:56.902 "state": "online", 00:14:56.902 "raid_level": "raid5f", 00:14:56.902 "superblock": true, 00:14:56.902 "num_base_bdevs": 4, 00:14:56.902 "num_base_bdevs_discovered": 3, 00:14:56.902 "num_base_bdevs_operational": 3, 00:14:56.902 "base_bdevs_list": [ 00:14:56.902 { 00:14:56.902 "name": null, 00:14:56.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.902 "is_configured": false, 00:14:56.902 "data_offset": 0, 00:14:56.902 "data_size": 63488 00:14:56.902 }, 00:14:56.902 { 00:14:56.902 "name": "BaseBdev2", 00:14:56.902 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:56.902 "is_configured": true, 00:14:56.902 "data_offset": 2048, 00:14:56.902 "data_size": 63488 00:14:56.902 }, 00:14:56.902 { 00:14:56.902 "name": "BaseBdev3", 00:14:56.902 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:56.902 "is_configured": true, 00:14:56.902 "data_offset": 2048, 00:14:56.902 "data_size": 63488 00:14:56.902 }, 00:14:56.902 { 00:14:56.902 "name": "BaseBdev4", 00:14:56.902 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:56.902 "is_configured": true, 00:14:56.902 "data_offset": 2048, 00:14:56.902 "data_size": 63488 00:14:56.902 } 00:14:56.902 ] 00:14:56.902 }' 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.902 
01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.902 [2024-12-07 01:59:02.235826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.902 [2024-12-07 01:59:02.236027] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:56.902 [2024-12-07 01:59:02.236044] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:56.902 request: 00:14:56.902 { 00:14:56.902 "base_bdev": "BaseBdev1", 00:14:56.902 "raid_bdev": "raid_bdev1", 00:14:56.902 "method": "bdev_raid_add_base_bdev", 00:14:56.902 "req_id": 1 00:14:56.902 } 00:14:56.902 Got JSON-RPC error response 00:14:56.902 response: 00:14:56.902 { 00:14:56.902 "code": -22, 00:14:56.902 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:56.902 } 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.902 01:59:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.838 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.839 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.839 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.098 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.098 "name": "raid_bdev1", 00:14:58.098 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:58.098 "strip_size_kb": 64, 00:14:58.098 "state": "online", 00:14:58.098 "raid_level": "raid5f", 00:14:58.098 "superblock": true, 00:14:58.098 "num_base_bdevs": 4, 00:14:58.098 "num_base_bdevs_discovered": 3, 00:14:58.098 "num_base_bdevs_operational": 3, 00:14:58.098 "base_bdevs_list": [ 00:14:58.098 { 00:14:58.098 "name": null, 00:14:58.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.098 "is_configured": false, 00:14:58.098 "data_offset": 0, 00:14:58.098 "data_size": 63488 00:14:58.098 }, 00:14:58.098 { 00:14:58.098 "name": "BaseBdev2", 00:14:58.098 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 }, 00:14:58.098 { 00:14:58.098 "name": "BaseBdev3", 00:14:58.098 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 }, 00:14:58.098 { 00:14:58.098 "name": "BaseBdev4", 00:14:58.098 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:58.098 "is_configured": true, 00:14:58.098 "data_offset": 2048, 00:14:58.098 "data_size": 63488 00:14:58.098 } 00:14:58.098 ] 00:14:58.098 }' 00:14:58.098 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.098 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.358 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.358 "name": "raid_bdev1", 00:14:58.358 "uuid": "2f4213c5-eb38-43a0-b6ca-eba2a9000fa1", 00:14:58.358 "strip_size_kb": 64, 00:14:58.358 "state": "online", 00:14:58.358 "raid_level": "raid5f", 00:14:58.358 "superblock": true, 00:14:58.358 "num_base_bdevs": 4, 00:14:58.358 "num_base_bdevs_discovered": 3, 00:14:58.358 "num_base_bdevs_operational": 3, 00:14:58.358 "base_bdevs_list": [ 00:14:58.358 { 00:14:58.358 "name": null, 00:14:58.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.358 "is_configured": false, 00:14:58.358 "data_offset": 0, 00:14:58.358 "data_size": 63488 00:14:58.358 }, 00:14:58.358 { 00:14:58.358 "name": "BaseBdev2", 00:14:58.358 "uuid": "579d1620-efaa-587c-946e-7bd1f7b08e3d", 00:14:58.358 "is_configured": true, 
00:14:58.358 "data_offset": 2048, 00:14:58.358 "data_size": 63488 00:14:58.358 }, 00:14:58.358 { 00:14:58.358 "name": "BaseBdev3", 00:14:58.358 "uuid": "528561d8-8320-588f-ae09-809a71c6212c", 00:14:58.358 "is_configured": true, 00:14:58.358 "data_offset": 2048, 00:14:58.358 "data_size": 63488 00:14:58.358 }, 00:14:58.358 { 00:14:58.358 "name": "BaseBdev4", 00:14:58.358 "uuid": "8f6cc231-ca0b-5686-8da4-1acc0360c609", 00:14:58.358 "is_configured": true, 00:14:58.359 "data_offset": 2048, 00:14:58.359 "data_size": 63488 00:14:58.359 } 00:14:58.359 ] 00:14:58.359 }' 00:14:58.359 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95191 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 95191 ']' 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 95191 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95191 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.618 killing process with pid 95191 00:14:58.618 Received shutdown signal, test 
time was about 60.000000 seconds 00:14:58.618 00:14:58.618 Latency(us) 00:14:58.618 [2024-12-07T01:59:04.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:58.618 [2024-12-07T01:59:04.080Z] =================================================================================================================== 00:14:58.618 [2024-12-07T01:59:04.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95191' 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 95191 00:14:58.618 [2024-12-07 01:59:03.882697] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:58.618 [2024-12-07 01:59:03.882812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.618 01:59:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 95191 00:14:58.618 [2024-12-07 01:59:03.882887] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.618 [2024-12-07 01:59:03.882897] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:58.618 [2024-12-07 01:59:03.933566] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.878 01:59:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:58.878 00:14:58.878 real 0m25.059s 00:14:58.878 user 0m31.813s 00:14:58.878 sys 0m3.037s 00:14:58.878 01:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.879 01:59:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.879 ************************************ 00:14:58.879 END TEST raid5f_rebuild_test_sb 00:14:58.879 ************************************ 00:14:58.879 01:59:04 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:58.879 01:59:04 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:58.879 01:59:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:14:58.879 01:59:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:58.879 01:59:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:58.879 ************************************ 00:14:58.879 START TEST raid_state_function_test_sb_4k 00:14:58.879 ************************************ 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:58.879 01:59:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=95986 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95986' 00:14:58.879 Process raid pid: 95986 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 95986 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 95986 ']' 00:14:58.879 01:59:04 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:58.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.879 01:59:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.138 [2024-12-07 01:59:04.345014] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:59.138 [2024-12-07 01:59:04.345275] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.138 [2024-12-07 01:59:04.494818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.138 [2024-12-07 01:59:04.540862] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.138 [2024-12-07 01:59:04.583177] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.138 [2024-12-07 01:59:04.583217] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.707 [2024-12-07 01:59:05.153316] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.707 [2024-12-07 01:59:05.153365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.707 [2024-12-07 01:59:05.153378] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.707 [2024-12-07 01:59:05.153388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.707 
01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.707 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.966 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.966 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.966 "name": "Existed_Raid", 00:14:59.966 "uuid": "7a699045-48e9-42af-8d67-ce534740b817", 00:14:59.966 "strip_size_kb": 0, 00:14:59.966 "state": "configuring", 00:14:59.966 "raid_level": "raid1", 00:14:59.966 "superblock": true, 00:14:59.966 "num_base_bdevs": 2, 00:14:59.966 "num_base_bdevs_discovered": 0, 00:14:59.966 "num_base_bdevs_operational": 2, 00:14:59.966 "base_bdevs_list": [ 00:14:59.966 { 00:14:59.966 "name": "BaseBdev1", 00:14:59.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.966 "is_configured": false, 00:14:59.966 "data_offset": 0, 00:14:59.966 "data_size": 0 00:14:59.966 }, 00:14:59.966 { 00:14:59.966 "name": "BaseBdev2", 00:14:59.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.966 "is_configured": false, 00:14:59.966 "data_offset": 0, 00:14:59.966 "data_size": 0 00:14:59.966 } 00:14:59.966 ] 00:14:59.966 }' 00:14:59.966 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.966 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.223 [2024-12-07 01:59:05.612450] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.223 [2024-12-07 01:59:05.612553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.223 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.223 [2024-12-07 01:59:05.624432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.224 [2024-12-07 01:59:05.624506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.224 [2024-12-07 01:59:05.624545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.224 [2024-12-07 01:59:05.624567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.224 01:59:05 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.224 [2024-12-07 01:59:05.645098] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.224 BaseBdev1 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.224 [ 00:15:00.224 { 00:15:00.224 "name": "BaseBdev1", 00:15:00.224 "aliases": [ 00:15:00.224 
"f656f809-ca38-48d0-8520-77a35cc3fdce" 00:15:00.224 ], 00:15:00.224 "product_name": "Malloc disk", 00:15:00.224 "block_size": 4096, 00:15:00.224 "num_blocks": 8192, 00:15:00.224 "uuid": "f656f809-ca38-48d0-8520-77a35cc3fdce", 00:15:00.224 "assigned_rate_limits": { 00:15:00.224 "rw_ios_per_sec": 0, 00:15:00.224 "rw_mbytes_per_sec": 0, 00:15:00.224 "r_mbytes_per_sec": 0, 00:15:00.224 "w_mbytes_per_sec": 0 00:15:00.224 }, 00:15:00.224 "claimed": true, 00:15:00.224 "claim_type": "exclusive_write", 00:15:00.224 "zoned": false, 00:15:00.224 "supported_io_types": { 00:15:00.224 "read": true, 00:15:00.224 "write": true, 00:15:00.224 "unmap": true, 00:15:00.224 "flush": true, 00:15:00.224 "reset": true, 00:15:00.224 "nvme_admin": false, 00:15:00.224 "nvme_io": false, 00:15:00.224 "nvme_io_md": false, 00:15:00.224 "write_zeroes": true, 00:15:00.224 "zcopy": true, 00:15:00.224 "get_zone_info": false, 00:15:00.224 "zone_management": false, 00:15:00.224 "zone_append": false, 00:15:00.224 "compare": false, 00:15:00.224 "compare_and_write": false, 00:15:00.224 "abort": true, 00:15:00.224 "seek_hole": false, 00:15:00.224 "seek_data": false, 00:15:00.224 "copy": true, 00:15:00.224 "nvme_iov_md": false 00:15:00.224 }, 00:15:00.224 "memory_domains": [ 00:15:00.224 { 00:15:00.224 "dma_device_id": "system", 00:15:00.224 "dma_device_type": 1 00:15:00.224 }, 00:15:00.224 { 00:15:00.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.224 "dma_device_type": 2 00:15:00.224 } 00:15:00.224 ], 00:15:00.224 "driver_specific": {} 00:15:00.224 } 00:15:00.224 ] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.224 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.482 "name": "Existed_Raid", 00:15:00.482 "uuid": "2837c174-f8f1-4a54-99c0-c9ee66b3efad", 00:15:00.482 "strip_size_kb": 0, 00:15:00.482 "state": "configuring", 00:15:00.482 "raid_level": "raid1", 00:15:00.482 "superblock": true, 00:15:00.482 "num_base_bdevs": 2, 00:15:00.482 
"num_base_bdevs_discovered": 1, 00:15:00.482 "num_base_bdevs_operational": 2, 00:15:00.482 "base_bdevs_list": [ 00:15:00.482 { 00:15:00.482 "name": "BaseBdev1", 00:15:00.482 "uuid": "f656f809-ca38-48d0-8520-77a35cc3fdce", 00:15:00.482 "is_configured": true, 00:15:00.482 "data_offset": 256, 00:15:00.482 "data_size": 7936 00:15:00.482 }, 00:15:00.482 { 00:15:00.482 "name": "BaseBdev2", 00:15:00.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.482 "is_configured": false, 00:15:00.482 "data_offset": 0, 00:15:00.482 "data_size": 0 00:15:00.482 } 00:15:00.482 ] 00:15:00.482 }' 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.482 01:59:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.741 [2024-12-07 01:59:06.128337] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.741 [2024-12-07 01:59:06.128471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.741 [2024-12-07 01:59:06.140363] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.741 [2024-12-07 01:59:06.142208] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.741 [2024-12-07 01:59:06.142252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.741 "name": "Existed_Raid", 00:15:00.741 "uuid": "26ebd4fc-1a20-41f3-ab9e-48f6318e2500", 00:15:00.741 "strip_size_kb": 0, 00:15:00.741 "state": "configuring", 00:15:00.741 "raid_level": "raid1", 00:15:00.741 "superblock": true, 00:15:00.741 "num_base_bdevs": 2, 00:15:00.741 "num_base_bdevs_discovered": 1, 00:15:00.741 "num_base_bdevs_operational": 2, 00:15:00.741 "base_bdevs_list": [ 00:15:00.741 { 00:15:00.741 "name": "BaseBdev1", 00:15:00.741 "uuid": "f656f809-ca38-48d0-8520-77a35cc3fdce", 00:15:00.741 "is_configured": true, 00:15:00.741 "data_offset": 256, 00:15:00.741 "data_size": 7936 00:15:00.741 }, 00:15:00.741 { 00:15:00.741 "name": "BaseBdev2", 00:15:00.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.741 "is_configured": false, 00:15:00.741 "data_offset": 0, 00:15:00.741 "data_size": 0 00:15:00.741 } 00:15:00.741 ] 00:15:00.741 }' 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.741 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.309 01:59:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 [2024-12-07 01:59:06.630213] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:01.309 [2024-12-07 01:59:06.630570] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:01.309 [2024-12-07 01:59:06.630630] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:01.309 [2024-12-07 01:59:06.630972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:01.309 [2024-12-07 01:59:06.631183] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:01.309 [2024-12-07 01:59:06.631244] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:01.309 BaseBdev2 00:15:01.309 [2024-12-07 01:59:06.631433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:01.309 01:59:06 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 [ 00:15:01.309 { 00:15:01.309 "name": "BaseBdev2", 00:15:01.309 "aliases": [ 00:15:01.309 "3193383a-e90e-408c-b8ea-a8d0f54bee40" 00:15:01.309 ], 00:15:01.309 "product_name": "Malloc disk", 00:15:01.309 "block_size": 4096, 00:15:01.309 "num_blocks": 8192, 00:15:01.309 "uuid": "3193383a-e90e-408c-b8ea-a8d0f54bee40", 00:15:01.309 "assigned_rate_limits": { 00:15:01.309 "rw_ios_per_sec": 0, 00:15:01.309 "rw_mbytes_per_sec": 0, 00:15:01.309 "r_mbytes_per_sec": 0, 00:15:01.309 "w_mbytes_per_sec": 0 00:15:01.309 }, 00:15:01.309 "claimed": true, 00:15:01.309 "claim_type": "exclusive_write", 00:15:01.309 "zoned": false, 00:15:01.309 "supported_io_types": { 00:15:01.309 "read": true, 00:15:01.309 "write": true, 00:15:01.309 "unmap": true, 00:15:01.309 "flush": true, 00:15:01.309 "reset": true, 00:15:01.309 "nvme_admin": false, 00:15:01.309 "nvme_io": false, 00:15:01.309 "nvme_io_md": false, 00:15:01.309 "write_zeroes": true, 00:15:01.309 "zcopy": true, 00:15:01.309 "get_zone_info": false, 00:15:01.309 "zone_management": false, 00:15:01.309 "zone_append": false, 00:15:01.309 "compare": false, 00:15:01.309 "compare_and_write": false, 00:15:01.309 "abort": true, 00:15:01.309 "seek_hole": false, 00:15:01.309 "seek_data": false, 00:15:01.309 "copy": true, 00:15:01.309 "nvme_iov_md": false 
00:15:01.309 }, 00:15:01.309 "memory_domains": [ 00:15:01.309 { 00:15:01.309 "dma_device_id": "system", 00:15:01.309 "dma_device_type": 1 00:15:01.309 }, 00:15:01.309 { 00:15:01.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.309 "dma_device_type": 2 00:15:01.309 } 00:15:01.309 ], 00:15:01.309 "driver_specific": {} 00:15:01.309 } 00:15:01.309 ] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.309 "name": "Existed_Raid", 00:15:01.309 "uuid": "26ebd4fc-1a20-41f3-ab9e-48f6318e2500", 00:15:01.309 "strip_size_kb": 0, 00:15:01.309 "state": "online", 00:15:01.309 "raid_level": "raid1", 00:15:01.309 "superblock": true, 00:15:01.309 "num_base_bdevs": 2, 00:15:01.309 "num_base_bdevs_discovered": 2, 00:15:01.309 "num_base_bdevs_operational": 2, 00:15:01.309 "base_bdevs_list": [ 00:15:01.309 { 00:15:01.309 "name": "BaseBdev1", 00:15:01.309 "uuid": "f656f809-ca38-48d0-8520-77a35cc3fdce", 00:15:01.309 "is_configured": true, 00:15:01.309 "data_offset": 256, 00:15:01.309 "data_size": 7936 00:15:01.309 }, 00:15:01.309 { 00:15:01.309 "name": "BaseBdev2", 00:15:01.309 "uuid": "3193383a-e90e-408c-b8ea-a8d0f54bee40", 00:15:01.309 "is_configured": true, 00:15:01.309 "data_offset": 256, 00:15:01.309 "data_size": 7936 00:15:01.309 } 00:15:01.309 ] 00:15:01.309 }' 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.309 01:59:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.872 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.872 01:59:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:01.872 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.872 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.872 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.873 [2024-12-07 01:59:07.153620] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.873 "name": "Existed_Raid", 00:15:01.873 "aliases": [ 00:15:01.873 "26ebd4fc-1a20-41f3-ab9e-48f6318e2500" 00:15:01.873 ], 00:15:01.873 "product_name": "Raid Volume", 00:15:01.873 "block_size": 4096, 00:15:01.873 "num_blocks": 7936, 00:15:01.873 "uuid": "26ebd4fc-1a20-41f3-ab9e-48f6318e2500", 00:15:01.873 "assigned_rate_limits": { 00:15:01.873 "rw_ios_per_sec": 0, 00:15:01.873 "rw_mbytes_per_sec": 0, 00:15:01.873 "r_mbytes_per_sec": 0, 00:15:01.873 "w_mbytes_per_sec": 0 00:15:01.873 }, 00:15:01.873 "claimed": false, 00:15:01.873 "zoned": false, 00:15:01.873 "supported_io_types": { 00:15:01.873 "read": true, 
00:15:01.873 "write": true, 00:15:01.873 "unmap": false, 00:15:01.873 "flush": false, 00:15:01.873 "reset": true, 00:15:01.873 "nvme_admin": false, 00:15:01.873 "nvme_io": false, 00:15:01.873 "nvme_io_md": false, 00:15:01.873 "write_zeroes": true, 00:15:01.873 "zcopy": false, 00:15:01.873 "get_zone_info": false, 00:15:01.873 "zone_management": false, 00:15:01.873 "zone_append": false, 00:15:01.873 "compare": false, 00:15:01.873 "compare_and_write": false, 00:15:01.873 "abort": false, 00:15:01.873 "seek_hole": false, 00:15:01.873 "seek_data": false, 00:15:01.873 "copy": false, 00:15:01.873 "nvme_iov_md": false 00:15:01.873 }, 00:15:01.873 "memory_domains": [ 00:15:01.873 { 00:15:01.873 "dma_device_id": "system", 00:15:01.873 "dma_device_type": 1 00:15:01.873 }, 00:15:01.873 { 00:15:01.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.873 "dma_device_type": 2 00:15:01.873 }, 00:15:01.873 { 00:15:01.873 "dma_device_id": "system", 00:15:01.873 "dma_device_type": 1 00:15:01.873 }, 00:15:01.873 { 00:15:01.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.873 "dma_device_type": 2 00:15:01.873 } 00:15:01.873 ], 00:15:01.873 "driver_specific": { 00:15:01.873 "raid": { 00:15:01.873 "uuid": "26ebd4fc-1a20-41f3-ab9e-48f6318e2500", 00:15:01.873 "strip_size_kb": 0, 00:15:01.873 "state": "online", 00:15:01.873 "raid_level": "raid1", 00:15:01.873 "superblock": true, 00:15:01.873 "num_base_bdevs": 2, 00:15:01.873 "num_base_bdevs_discovered": 2, 00:15:01.873 "num_base_bdevs_operational": 2, 00:15:01.873 "base_bdevs_list": [ 00:15:01.873 { 00:15:01.873 "name": "BaseBdev1", 00:15:01.873 "uuid": "f656f809-ca38-48d0-8520-77a35cc3fdce", 00:15:01.873 "is_configured": true, 00:15:01.873 "data_offset": 256, 00:15:01.873 "data_size": 7936 00:15:01.873 }, 00:15:01.873 { 00:15:01.873 "name": "BaseBdev2", 00:15:01.873 "uuid": "3193383a-e90e-408c-b8ea-a8d0f54bee40", 00:15:01.873 "is_configured": true, 00:15:01.873 "data_offset": 256, 00:15:01.873 "data_size": 7936 00:15:01.873 } 
00:15:01.873 ] 00:15:01.873 } 00:15:01.873 } 00:15:01.873 }' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:01.873 BaseBdev2' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.873 01:59:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.873 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.131 [2024-12-07 01:59:07.361060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:02.131 01:59:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.131 "name": "Existed_Raid", 00:15:02.131 "uuid": "26ebd4fc-1a20-41f3-ab9e-48f6318e2500", 00:15:02.131 "strip_size_kb": 0, 00:15:02.131 "state": "online", 00:15:02.131 "raid_level": "raid1", 00:15:02.131 "superblock": true, 00:15:02.131 
"num_base_bdevs": 2, 00:15:02.131 "num_base_bdevs_discovered": 1, 00:15:02.131 "num_base_bdevs_operational": 1, 00:15:02.131 "base_bdevs_list": [ 00:15:02.131 { 00:15:02.131 "name": null, 00:15:02.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.131 "is_configured": false, 00:15:02.131 "data_offset": 0, 00:15:02.131 "data_size": 7936 00:15:02.131 }, 00:15:02.131 { 00:15:02.131 "name": "BaseBdev2", 00:15:02.131 "uuid": "3193383a-e90e-408c-b8ea-a8d0f54bee40", 00:15:02.131 "is_configured": true, 00:15:02.131 "data_offset": 256, 00:15:02.131 "data_size": 7936 00:15:02.131 } 00:15:02.131 ] 00:15:02.131 }' 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.131 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 [2024-12-07 01:59:07.907568] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:02.697 [2024-12-07 01:59:07.907692] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.697 [2024-12-07 01:59:07.919252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.697 [2024-12-07 01:59:07.919379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.697 [2024-12-07 01:59:07.919420] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:02.697 01:59:07 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 95986 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 95986 ']' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 95986 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.697 01:59:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95986 00:15:02.697 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.697 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.697 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95986' 00:15:02.697 killing process with pid 95986 00:15:02.697 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 95986 00:15:02.697 [2024-12-07 01:59:08.018127] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:02.697 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 95986 00:15:02.697 [2024-12-07 01:59:08.019137] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.955 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:15:02.955 00:15:02.955 real 0m4.024s 00:15:02.955 user 0m6.317s 00:15:02.955 sys 0m0.873s 00:15:02.955 
************************************ 00:15:02.955 END TEST raid_state_function_test_sb_4k 00:15:02.955 ************************************ 00:15:02.955 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:02.955 01:59:08 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 01:59:08 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:15:02.955 01:59:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:02.955 01:59:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:02.955 01:59:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.955 ************************************ 00:15:02.955 START TEST raid_superblock_test_4k 00:15:02.955 ************************************ 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:02.955 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96227 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96227 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 96227 ']' 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:02.956 01:59:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 [2024-12-07 01:59:08.419850] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:03.214 [2024-12-07 01:59:08.420059] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96227 ] 00:15:03.214 [2024-12-07 01:59:08.562899] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.214 [2024-12-07 01:59:08.612107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.214 [2024-12-07 01:59:08.653659] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.214 [2024-12-07 01:59:08.653788] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:03.779 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 malloc1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 [2024-12-07 01:59:09.264360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:04.038 [2024-12-07 01:59:09.264422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.038 [2024-12-07 01:59:09.264444] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:04.038 [2024-12-07 01:59:09.264456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.038 [2024-12-07 01:59:09.266566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.038 [2024-12-07 01:59:09.266606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:04.038 pt1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 malloc2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 [2024-12-07 01:59:09.303720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:04.038 [2024-12-07 01:59:09.303829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.038 [2024-12-07 01:59:09.303861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.038 [2024-12-07 01:59:09.303891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.038 [2024-12-07 01:59:09.306010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.038 [2024-12-07 
01:59:09.306076] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:04.038 pt2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 [2024-12-07 01:59:09.315741] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.038 [2024-12-07 01:59:09.317632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.038 [2024-12-07 01:59:09.317830] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:04.038 [2024-12-07 01:59:09.317882] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:04.038 [2024-12-07 01:59:09.318164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:04.038 [2024-12-07 01:59:09.318337] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:04.038 [2024-12-07 01:59:09.318378] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:04.038 [2024-12-07 01:59:09.318537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.038 "name": "raid_bdev1", 00:15:04.038 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:04.038 "strip_size_kb": 0, 00:15:04.038 "state": "online", 00:15:04.038 "raid_level": "raid1", 00:15:04.038 "superblock": true, 00:15:04.038 "num_base_bdevs": 2, 00:15:04.038 
"num_base_bdevs_discovered": 2, 00:15:04.038 "num_base_bdevs_operational": 2, 00:15:04.038 "base_bdevs_list": [ 00:15:04.038 { 00:15:04.038 "name": "pt1", 00:15:04.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.038 "is_configured": true, 00:15:04.038 "data_offset": 256, 00:15:04.038 "data_size": 7936 00:15:04.038 }, 00:15:04.038 { 00:15:04.038 "name": "pt2", 00:15:04.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.038 "is_configured": true, 00:15:04.038 "data_offset": 256, 00:15:04.038 "data_size": 7936 00:15:04.038 } 00:15:04.038 ] 00:15:04.038 }' 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.038 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.297 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 [2024-12-07 01:59:09.759300] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.557 "name": "raid_bdev1", 00:15:04.557 "aliases": [ 00:15:04.557 "fa011143-b9d5-46c8-86ee-60db0e8fa5fa" 00:15:04.557 ], 00:15:04.557 "product_name": "Raid Volume", 00:15:04.557 "block_size": 4096, 00:15:04.557 "num_blocks": 7936, 00:15:04.557 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:04.557 "assigned_rate_limits": { 00:15:04.557 "rw_ios_per_sec": 0, 00:15:04.557 "rw_mbytes_per_sec": 0, 00:15:04.557 "r_mbytes_per_sec": 0, 00:15:04.557 "w_mbytes_per_sec": 0 00:15:04.557 }, 00:15:04.557 "claimed": false, 00:15:04.557 "zoned": false, 00:15:04.557 "supported_io_types": { 00:15:04.557 "read": true, 00:15:04.557 "write": true, 00:15:04.557 "unmap": false, 00:15:04.557 "flush": false, 00:15:04.557 "reset": true, 00:15:04.557 "nvme_admin": false, 00:15:04.557 "nvme_io": false, 00:15:04.557 "nvme_io_md": false, 00:15:04.557 "write_zeroes": true, 00:15:04.557 "zcopy": false, 00:15:04.557 "get_zone_info": false, 00:15:04.557 "zone_management": false, 00:15:04.557 "zone_append": false, 00:15:04.557 "compare": false, 00:15:04.557 "compare_and_write": false, 00:15:04.557 "abort": false, 00:15:04.557 "seek_hole": false, 00:15:04.557 "seek_data": false, 00:15:04.557 "copy": false, 00:15:04.557 "nvme_iov_md": false 00:15:04.557 }, 00:15:04.557 "memory_domains": [ 00:15:04.557 { 00:15:04.557 "dma_device_id": "system", 00:15:04.557 "dma_device_type": 1 00:15:04.557 }, 00:15:04.557 { 00:15:04.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.557 "dma_device_type": 2 00:15:04.557 }, 00:15:04.557 { 00:15:04.557 "dma_device_id": "system", 00:15:04.557 "dma_device_type": 1 00:15:04.557 }, 00:15:04.557 { 00:15:04.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.557 "dma_device_type": 2 00:15:04.557 } 00:15:04.557 ], 
00:15:04.557 "driver_specific": { 00:15:04.557 "raid": { 00:15:04.557 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:04.557 "strip_size_kb": 0, 00:15:04.557 "state": "online", 00:15:04.557 "raid_level": "raid1", 00:15:04.557 "superblock": true, 00:15:04.557 "num_base_bdevs": 2, 00:15:04.557 "num_base_bdevs_discovered": 2, 00:15:04.557 "num_base_bdevs_operational": 2, 00:15:04.557 "base_bdevs_list": [ 00:15:04.557 { 00:15:04.557 "name": "pt1", 00:15:04.557 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.557 "is_configured": true, 00:15:04.557 "data_offset": 256, 00:15:04.557 "data_size": 7936 00:15:04.557 }, 00:15:04.557 { 00:15:04.557 "name": "pt2", 00:15:04.557 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.557 "is_configured": true, 00:15:04.557 "data_offset": 256, 00:15:04.557 "data_size": 7936 00:15:04.557 } 00:15:04.557 ] 00:15:04.557 } 00:15:04.557 } 00:15:04.557 }' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:04.557 pt2' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.557 01:59:09 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:04.557 01:59:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.557 [2024-12-07 01:59:09.994835] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.557 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:04.816 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fa011143-b9d5-46c8-86ee-60db0e8fa5fa 00:15:04.816 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z fa011143-b9d5-46c8-86ee-60db0e8fa5fa ']' 00:15:04.816 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.816 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [2024-12-07 01:59:10.034488] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.817 [2024-12-07 01:59:10.034521] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:04.817 [2024-12-07 01:59:10.034601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.817 [2024-12-07 01:59:10.034681] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.817 [2024-12-07 01:59:10.034700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [2024-12-07 01:59:10.158307] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:04.817 [2024-12-07 01:59:10.160281] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:04.817 [2024-12-07 01:59:10.160352] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:04.817 [2024-12-07 01:59:10.160401] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:04.817 [2024-12-07 01:59:10.160417] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.817 [2024-12-07 01:59:10.160427] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:04.817 request: 00:15:04.817 { 00:15:04.817 "name": "raid_bdev1", 00:15:04.817 "raid_level": "raid1", 00:15:04.817 "base_bdevs": [ 00:15:04.817 "malloc1", 00:15:04.817 "malloc2" 00:15:04.817 ], 00:15:04.817 "superblock": false, 00:15:04.817 "method": "bdev_raid_create", 00:15:04.817 "req_id": 1 00:15:04.817 } 00:15:04.817 Got JSON-RPC error response 00:15:04.817 response: 00:15:04.817 { 00:15:04.817 "code": -17, 00:15:04.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:04.817 } 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 [2024-12-07 01:59:10.226140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:04.817 [2024-12-07 01:59:10.226261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.817 [2024-12-07 01:59:10.226323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:04.817 [2024-12-07 01:59:10.226361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.817 [2024-12-07 01:59:10.228602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.817 [2024-12-07 01:59:10.228682] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:04.817 [2024-12-07 01:59:10.228818] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:04.817 [2024-12-07 01:59:10.228891] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.817 pt1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.817 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.075 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.075 "name": "raid_bdev1", 00:15:05.075 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:05.075 "strip_size_kb": 0, 00:15:05.075 "state": "configuring", 00:15:05.075 "raid_level": "raid1", 00:15:05.075 "superblock": true, 00:15:05.075 "num_base_bdevs": 2, 00:15:05.075 "num_base_bdevs_discovered": 1, 00:15:05.075 "num_base_bdevs_operational": 2, 00:15:05.075 "base_bdevs_list": [ 00:15:05.075 { 00:15:05.075 "name": "pt1", 00:15:05.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.075 "is_configured": true, 00:15:05.075 "data_offset": 256, 00:15:05.075 "data_size": 7936 00:15:05.075 }, 00:15:05.075 { 00:15:05.075 "name": null, 00:15:05.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.075 "is_configured": false, 00:15:05.075 "data_offset": 256, 00:15:05.075 "data_size": 7936 00:15:05.075 } 
00:15:05.075 ] 00:15:05.075 }' 00:15:05.075 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.075 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.337 [2024-12-07 01:59:10.685370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.337 [2024-12-07 01:59:10.685498] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.337 [2024-12-07 01:59:10.685538] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:05.337 [2024-12-07 01:59:10.685565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.337 [2024-12-07 01:59:10.686032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.337 [2024-12-07 01:59:10.686088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.337 [2024-12-07 01:59:10.686195] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.337 [2024-12-07 01:59:10.686245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.337 [2024-12-07 01:59:10.686371] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:15:05.337 [2024-12-07 01:59:10.686409] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:05.337 [2024-12-07 01:59:10.686689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:05.337 [2024-12-07 01:59:10.686839] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:05.337 [2024-12-07 01:59:10.686882] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:05.337 [2024-12-07 01:59:10.687028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.337 pt2 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.337 "name": "raid_bdev1", 00:15:05.337 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:05.337 "strip_size_kb": 0, 00:15:05.337 "state": "online", 00:15:05.337 "raid_level": "raid1", 00:15:05.337 "superblock": true, 00:15:05.337 "num_base_bdevs": 2, 00:15:05.337 "num_base_bdevs_discovered": 2, 00:15:05.337 "num_base_bdevs_operational": 2, 00:15:05.337 "base_bdevs_list": [ 00:15:05.337 { 00:15:05.337 "name": "pt1", 00:15:05.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.337 "is_configured": true, 00:15:05.337 "data_offset": 256, 00:15:05.337 "data_size": 7936 00:15:05.337 }, 00:15:05.337 { 00:15:05.337 "name": "pt2", 00:15:05.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.337 "is_configured": true, 00:15:05.337 "data_offset": 256, 00:15:05.337 "data_size": 7936 00:15:05.337 } 00:15:05.337 ] 00:15:05.337 }' 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.337 01:59:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 [2024-12-07 01:59:11.120895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:05.903 "name": "raid_bdev1", 00:15:05.903 "aliases": [ 00:15:05.903 "fa011143-b9d5-46c8-86ee-60db0e8fa5fa" 00:15:05.903 ], 00:15:05.903 "product_name": "Raid Volume", 00:15:05.903 "block_size": 4096, 00:15:05.903 "num_blocks": 7936, 00:15:05.903 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:05.903 "assigned_rate_limits": { 00:15:05.903 "rw_ios_per_sec": 0, 00:15:05.903 "rw_mbytes_per_sec": 0, 00:15:05.903 "r_mbytes_per_sec": 0, 00:15:05.903 "w_mbytes_per_sec": 0 00:15:05.903 }, 00:15:05.903 "claimed": false, 00:15:05.903 "zoned": false, 00:15:05.903 "supported_io_types": { 00:15:05.903 "read": true, 00:15:05.903 "write": true, 00:15:05.903 "unmap": false, 
00:15:05.903 "flush": false, 00:15:05.903 "reset": true, 00:15:05.903 "nvme_admin": false, 00:15:05.903 "nvme_io": false, 00:15:05.903 "nvme_io_md": false, 00:15:05.903 "write_zeroes": true, 00:15:05.903 "zcopy": false, 00:15:05.903 "get_zone_info": false, 00:15:05.903 "zone_management": false, 00:15:05.903 "zone_append": false, 00:15:05.903 "compare": false, 00:15:05.903 "compare_and_write": false, 00:15:05.903 "abort": false, 00:15:05.903 "seek_hole": false, 00:15:05.903 "seek_data": false, 00:15:05.903 "copy": false, 00:15:05.903 "nvme_iov_md": false 00:15:05.903 }, 00:15:05.903 "memory_domains": [ 00:15:05.903 { 00:15:05.903 "dma_device_id": "system", 00:15:05.903 "dma_device_type": 1 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.903 "dma_device_type": 2 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "dma_device_id": "system", 00:15:05.903 "dma_device_type": 1 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.903 "dma_device_type": 2 00:15:05.903 } 00:15:05.903 ], 00:15:05.903 "driver_specific": { 00:15:05.903 "raid": { 00:15:05.903 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:05.903 "strip_size_kb": 0, 00:15:05.903 "state": "online", 00:15:05.903 "raid_level": "raid1", 00:15:05.903 "superblock": true, 00:15:05.903 "num_base_bdevs": 2, 00:15:05.903 "num_base_bdevs_discovered": 2, 00:15:05.903 "num_base_bdevs_operational": 2, 00:15:05.903 "base_bdevs_list": [ 00:15:05.903 { 00:15:05.903 "name": "pt1", 00:15:05.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.903 "is_configured": true, 00:15:05.903 "data_offset": 256, 00:15:05.903 "data_size": 7936 00:15:05.903 }, 00:15:05.903 { 00:15:05.903 "name": "pt2", 00:15:05.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.903 "is_configured": true, 00:15:05.903 "data_offset": 256, 00:15:05.903 "data_size": 7936 00:15:05.903 } 00:15:05.903 ] 00:15:05.903 } 00:15:05.903 } 00:15:05.903 }' 00:15:05.903 
01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:05.903 pt2' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.903 
01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:05.903 [2024-12-07 01:59:11.348444] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.903 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' fa011143-b9d5-46c8-86ee-60db0e8fa5fa '!=' fa011143-b9d5-46c8-86ee-60db0e8fa5fa ']' 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.162 [2024-12-07 01:59:11.384211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:06.162 
01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.162 "name": "raid_bdev1", 00:15:06.162 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 
00:15:06.162 "strip_size_kb": 0, 00:15:06.162 "state": "online", 00:15:06.162 "raid_level": "raid1", 00:15:06.162 "superblock": true, 00:15:06.162 "num_base_bdevs": 2, 00:15:06.162 "num_base_bdevs_discovered": 1, 00:15:06.162 "num_base_bdevs_operational": 1, 00:15:06.162 "base_bdevs_list": [ 00:15:06.162 { 00:15:06.162 "name": null, 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.162 "is_configured": false, 00:15:06.162 "data_offset": 0, 00:15:06.162 "data_size": 7936 00:15:06.162 }, 00:15:06.162 { 00:15:06.162 "name": "pt2", 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.162 "is_configured": true, 00:15:06.162 "data_offset": 256, 00:15:06.162 "data_size": 7936 00:15:06.162 } 00:15:06.162 ] 00:15:06.162 }' 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.162 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.420 [2024-12-07 01:59:11.827625] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.420 [2024-12-07 01:59:11.827658] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.420 [2024-12-07 01:59:11.827768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.420 [2024-12-07 01:59:11.827871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.420 [2024-12-07 01:59:11.827883] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:06.420 01:59:11 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.420 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:15:06.678 01:59:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.678 [2024-12-07 01:59:11.903454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.678 [2024-12-07 01:59:11.903527] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.678 [2024-12-07 01:59:11.903549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:06.678 [2024-12-07 01:59:11.903557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.678 [2024-12-07 01:59:11.905775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.678 [2024-12-07 01:59:11.905855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.678 [2024-12-07 01:59:11.905938] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:06.678 [2024-12-07 01:59:11.905974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.678 [2024-12-07 01:59:11.906075] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:06.678 [2024-12-07 01:59:11.906084] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:06.678 [2024-12-07 01:59:11.906312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:06.678 [2024-12-07 01:59:11.906422] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:06.678 [2024-12-07 01:59:11.906432] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 
00:15:06.678 [2024-12-07 01:59:11.906535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.678 pt2 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.678 01:59:11 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.678 "name": "raid_bdev1", 00:15:06.678 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:06.678 "strip_size_kb": 0, 00:15:06.678 "state": "online", 00:15:06.678 "raid_level": "raid1", 00:15:06.678 "superblock": true, 00:15:06.678 "num_base_bdevs": 2, 00:15:06.678 "num_base_bdevs_discovered": 1, 00:15:06.678 "num_base_bdevs_operational": 1, 00:15:06.678 "base_bdevs_list": [ 00:15:06.679 { 00:15:06.679 "name": null, 00:15:06.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.679 "is_configured": false, 00:15:06.679 "data_offset": 256, 00:15:06.679 "data_size": 7936 00:15:06.679 }, 00:15:06.679 { 00:15:06.679 "name": "pt2", 00:15:06.679 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.679 "is_configured": true, 00:15:06.679 "data_offset": 256, 00:15:06.679 "data_size": 7936 00:15:06.679 } 00:15:06.679 ] 00:15:06.679 }' 00:15:06.679 01:59:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.679 01:59:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.937 [2024-12-07 01:59:12.366675] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:06.937 [2024-12-07 01:59:12.366749] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:06.937 [2024-12-07 01:59:12.366861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:06.937 [2024-12-07 01:59:12.366940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:06.937 [2024-12-07 01:59:12.366987] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:06.937 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.251 [2024-12-07 01:59:12.426578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.251 [2024-12-07 01:59:12.426704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.251 [2024-12-07 01:59:12.426742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:07.251 [2024-12-07 01:59:12.426780] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.251 [2024-12-07 01:59:12.428964] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.251 [2024-12-07 01:59:12.429033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.251 [2024-12-07 01:59:12.429127] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.251 [2024-12-07 01:59:12.429212] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.251 [2024-12-07 01:59:12.429339] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:07.251 [2024-12-07 01:59:12.429397] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.251 [2024-12-07 01:59:12.429433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:07.251 [2024-12-07 01:59:12.429512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.251 [2024-12-07 01:59:12.429619] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:07.251 [2024-12-07 01:59:12.429670] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:07.251 [2024-12-07 01:59:12.429910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:07.251 [2024-12-07 01:59:12.430062] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:07.251 [2024-12-07 01:59:12.430102] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:07.251 [2024-12-07 01:59:12.430254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.251 pt1 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.251 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.252 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.252 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.252 "name": "raid_bdev1", 00:15:07.252 "uuid": "fa011143-b9d5-46c8-86ee-60db0e8fa5fa", 00:15:07.252 "strip_size_kb": 0, 00:15:07.252 "state": "online", 00:15:07.252 "raid_level": "raid1", 
00:15:07.252 "superblock": true, 00:15:07.252 "num_base_bdevs": 2, 00:15:07.252 "num_base_bdevs_discovered": 1, 00:15:07.252 "num_base_bdevs_operational": 1, 00:15:07.252 "base_bdevs_list": [ 00:15:07.252 { 00:15:07.252 "name": null, 00:15:07.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.252 "is_configured": false, 00:15:07.252 "data_offset": 256, 00:15:07.252 "data_size": 7936 00:15:07.252 }, 00:15:07.252 { 00:15:07.252 "name": "pt2", 00:15:07.252 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.252 "is_configured": true, 00:15:07.252 "data_offset": 256, 00:15:07.252 "data_size": 7936 00:15:07.252 } 00:15:07.252 ] 00:15:07.252 }' 00:15:07.252 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.252 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:07.534 
[2024-12-07 01:59:12.874019] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' fa011143-b9d5-46c8-86ee-60db0e8fa5fa '!=' fa011143-b9d5-46c8-86ee-60db0e8fa5fa ']' 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96227 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 96227 ']' 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 96227 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96227 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:07.534 killing process with pid 96227 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:07.534 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96227' 00:15:07.535 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 96227 00:15:07.535 [2024-12-07 01:59:12.959679] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.535 [2024-12-07 01:59:12.959772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.535 [2024-12-07 01:59:12.959823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.535 [2024-12-07 01:59:12.959832] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:07.535 01:59:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 96227 00:15:07.535 [2024-12-07 01:59:12.982887] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:07.794 01:59:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:15:07.794 00:15:07.794 real 0m4.893s 00:15:07.794 user 0m7.940s 00:15:07.794 sys 0m1.093s 00:15:07.794 01:59:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:07.794 01:59:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.794 ************************************ 00:15:07.794 END TEST raid_superblock_test_4k 00:15:07.794 ************************************ 00:15:08.055 01:59:13 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:15:08.055 01:59:13 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:15:08.055 01:59:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:08.055 01:59:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.055 01:59:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.055 ************************************ 00:15:08.055 START TEST raid_rebuild_test_sb_4k 00:15:08.055 ************************************ 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:08.055 01:59:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96539 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96539 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 96539 ']' 00:15:08.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.055 01:59:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.055 [2024-12-07 01:59:13.397337] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:08.055 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:08.055 Zero copy mechanism will not be used. 
00:15:08.055 [2024-12-07 01:59:13.397550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96539 ] 00:15:08.315 [2024-12-07 01:59:13.523263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.315 [2024-12-07 01:59:13.567330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.315 [2024-12-07 01:59:13.609724] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.315 [2024-12-07 01:59:13.609759] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 BaseBdev1_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 [2024-12-07 01:59:14.235696] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.886 [2024-12-07 01:59:14.235754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.886 [2024-12-07 01:59:14.235800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:08.886 [2024-12-07 01:59:14.235814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.886 [2024-12-07 01:59:14.237878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.886 [2024-12-07 01:59:14.237916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.886 BaseBdev1 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 BaseBdev2_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 [2024-12-07 01:59:14.283329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:08.886 [2024-12-07 01:59:14.283391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:15:08.886 [2024-12-07 01:59:14.283418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:08.886 [2024-12-07 01:59:14.283430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.886 [2024-12-07 01:59:14.286282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.886 [2024-12-07 01:59:14.286379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:08.886 BaseBdev2 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 spare_malloc 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 spare_delay 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 
[2024-12-07 01:59:14.323572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.886 [2024-12-07 01:59:14.323688] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.886 [2024-12-07 01:59:14.323716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:08.886 [2024-12-07 01:59:14.323725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.886 [2024-12-07 01:59:14.325790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.886 [2024-12-07 01:59:14.325822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.886 spare 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.886 [2024-12-07 01:59:14.335600] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.886 [2024-12-07 01:59:14.337501] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:08.886 [2024-12-07 01:59:14.337663] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:08.886 [2024-12-07 01:59:14.337689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:08.886 [2024-12-07 01:59:14.337952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:08.886 [2024-12-07 01:59:14.338073] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:08.886 [2024-12-07 
01:59:14.338091] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:08.886 [2024-12-07 01:59:14.338226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.886 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.887 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.147 "name": "raid_bdev1", 00:15:09.147 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:09.147 "strip_size_kb": 0, 00:15:09.147 "state": "online", 00:15:09.147 "raid_level": "raid1", 00:15:09.147 "superblock": true, 00:15:09.147 "num_base_bdevs": 2, 00:15:09.147 "num_base_bdevs_discovered": 2, 00:15:09.147 "num_base_bdevs_operational": 2, 00:15:09.147 "base_bdevs_list": [ 00:15:09.147 { 00:15:09.147 "name": "BaseBdev1", 00:15:09.147 "uuid": "7ae5e0e6-eace-50a2-a2e3-7f8a91bcb096", 00:15:09.147 "is_configured": true, 00:15:09.147 "data_offset": 256, 00:15:09.147 "data_size": 7936 00:15:09.147 }, 00:15:09.147 { 00:15:09.147 "name": "BaseBdev2", 00:15:09.147 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:09.147 "is_configured": true, 00:15:09.147 "data_offset": 256, 00:15:09.147 "data_size": 7936 00:15:09.147 } 00:15:09.147 ] 00:15:09.147 }' 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.147 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.407 [2024-12-07 01:59:14.783094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:09.407 01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.407 
01:59:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:09.667 [2024-12-07 01:59:15.038529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:09.667 /dev/nbd0 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:09.667 1+0 records in 00:15:09.667 1+0 records out 00:15:09.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209631 s, 19.5 MB/s 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:09.667 01:59:15 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:09.667 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:10.235 7936+0 records in 00:15:10.235 7936+0 records out 00:15:10.235 32505856 bytes (33 MB, 31 MiB) copied, 0.577844 s, 56.3 MB/s 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:10.235 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:10.494 [2024-12-07 01:59:15.869134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.494 [2024-12-07 01:59:15.901344] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:10.494 01:59:15 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.494 "name": "raid_bdev1", 00:15:10.494 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:10.494 "strip_size_kb": 0, 00:15:10.494 "state": "online", 00:15:10.494 "raid_level": "raid1", 00:15:10.494 "superblock": true, 00:15:10.494 "num_base_bdevs": 2, 00:15:10.494 "num_base_bdevs_discovered": 1, 00:15:10.494 "num_base_bdevs_operational": 1, 00:15:10.494 "base_bdevs_list": [ 00:15:10.494 { 00:15:10.494 "name": null, 00:15:10.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.494 "is_configured": false, 00:15:10.494 "data_offset": 0, 00:15:10.494 "data_size": 7936 00:15:10.494 }, 00:15:10.494 { 00:15:10.494 "name": "BaseBdev2", 00:15:10.494 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:10.494 "is_configured": true, 00:15:10.494 "data_offset": 256, 00:15:10.494 
"data_size": 7936 00:15:10.494 } 00:15:10.494 ] 00:15:10.494 }' 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.494 01:59:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.063 01:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:11.063 01:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.063 01:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.063 [2024-12-07 01:59:16.352626] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:11.063 [2024-12-07 01:59:16.356846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:11.063 01:59:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.063 01:59:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:11.063 [2024-12-07 01:59:16.358809] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:12.004 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.005 "name": "raid_bdev1", 00:15:12.005 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:12.005 "strip_size_kb": 0, 00:15:12.005 "state": "online", 00:15:12.005 "raid_level": "raid1", 00:15:12.005 "superblock": true, 00:15:12.005 "num_base_bdevs": 2, 00:15:12.005 "num_base_bdevs_discovered": 2, 00:15:12.005 "num_base_bdevs_operational": 2, 00:15:12.005 "process": { 00:15:12.005 "type": "rebuild", 00:15:12.005 "target": "spare", 00:15:12.005 "progress": { 00:15:12.005 "blocks": 2560, 00:15:12.005 "percent": 32 00:15:12.005 } 00:15:12.005 }, 00:15:12.005 "base_bdevs_list": [ 00:15:12.005 { 00:15:12.005 "name": "spare", 00:15:12.005 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:12.005 "is_configured": true, 00:15:12.005 "data_offset": 256, 00:15:12.005 "data_size": 7936 00:15:12.005 }, 00:15:12.005 { 00:15:12.005 "name": "BaseBdev2", 00:15:12.005 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:12.005 "is_configured": true, 00:15:12.005 "data_offset": 256, 00:15:12.005 "data_size": 7936 00:15:12.005 } 00:15:12.005 ] 00:15:12.005 }' 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:12.005 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.265 [2024-12-07 01:59:17.519890] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.265 [2024-12-07 01:59:17.564298] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:12.265 [2024-12-07 01:59:17.564366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.265 [2024-12-07 01:59:17.564402] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:12.265 [2024-12-07 01:59:17.564409] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.265 "name": "raid_bdev1", 00:15:12.265 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:12.265 "strip_size_kb": 0, 00:15:12.265 "state": "online", 00:15:12.265 "raid_level": "raid1", 00:15:12.265 "superblock": true, 00:15:12.265 "num_base_bdevs": 2, 00:15:12.265 "num_base_bdevs_discovered": 1, 00:15:12.265 "num_base_bdevs_operational": 1, 00:15:12.265 "base_bdevs_list": [ 00:15:12.265 { 00:15:12.265 "name": null, 00:15:12.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.265 "is_configured": false, 00:15:12.265 "data_offset": 0, 00:15:12.265 "data_size": 7936 00:15:12.265 }, 00:15:12.265 { 00:15:12.265 "name": "BaseBdev2", 00:15:12.265 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:12.265 "is_configured": true, 00:15:12.265 "data_offset": 256, 00:15:12.265 "data_size": 7936 00:15:12.265 } 00:15:12.265 ] 00:15:12.265 }' 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.265 01:59:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.837 01:59:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.837 "name": "raid_bdev1", 00:15:12.837 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:12.837 "strip_size_kb": 0, 00:15:12.837 "state": "online", 00:15:12.837 "raid_level": "raid1", 00:15:12.837 "superblock": true, 00:15:12.837 "num_base_bdevs": 2, 00:15:12.837 "num_base_bdevs_discovered": 1, 00:15:12.837 "num_base_bdevs_operational": 1, 00:15:12.837 "base_bdevs_list": [ 00:15:12.837 { 00:15:12.837 "name": null, 00:15:12.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.837 "is_configured": false, 00:15:12.837 "data_offset": 0, 00:15:12.837 "data_size": 7936 00:15:12.837 }, 00:15:12.837 { 00:15:12.837 "name": "BaseBdev2", 00:15:12.837 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:12.837 "is_configured": true, 00:15:12.837 "data_offset": 
256, 00:15:12.837 "data_size": 7936 00:15:12.837 } 00:15:12.837 ] 00:15:12.837 }' 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.837 [2024-12-07 01:59:18.156048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:12.837 [2024-12-07 01:59:18.160096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.837 01:59:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:12.837 [2024-12-07 01:59:18.162010] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.776 "name": "raid_bdev1", 00:15:13.776 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:13.776 "strip_size_kb": 0, 00:15:13.776 "state": "online", 00:15:13.776 "raid_level": "raid1", 00:15:13.776 "superblock": true, 00:15:13.776 "num_base_bdevs": 2, 00:15:13.776 "num_base_bdevs_discovered": 2, 00:15:13.776 "num_base_bdevs_operational": 2, 00:15:13.776 "process": { 00:15:13.776 "type": "rebuild", 00:15:13.776 "target": "spare", 00:15:13.776 "progress": { 00:15:13.776 "blocks": 2560, 00:15:13.776 "percent": 32 00:15:13.776 } 00:15:13.776 }, 00:15:13.776 "base_bdevs_list": [ 00:15:13.776 { 00:15:13.776 "name": "spare", 00:15:13.776 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:13.776 "is_configured": true, 00:15:13.776 "data_offset": 256, 00:15:13.776 "data_size": 7936 00:15:13.776 }, 00:15:13.776 { 00:15:13.776 "name": "BaseBdev2", 00:15:13.776 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:13.776 "is_configured": true, 00:15:13.776 "data_offset": 256, 00:15:13.776 "data_size": 7936 00:15:13.776 } 00:15:13.776 ] 00:15:13.776 }' 00:15:13.776 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:14.036 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=557 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.036 "name": "raid_bdev1", 00:15:14.036 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:14.036 "strip_size_kb": 0, 00:15:14.036 "state": "online", 00:15:14.036 "raid_level": "raid1", 00:15:14.036 "superblock": true, 00:15:14.036 "num_base_bdevs": 2, 00:15:14.036 "num_base_bdevs_discovered": 2, 00:15:14.036 "num_base_bdevs_operational": 2, 00:15:14.036 "process": { 00:15:14.036 "type": "rebuild", 00:15:14.036 "target": "spare", 00:15:14.036 "progress": { 00:15:14.036 "blocks": 2816, 00:15:14.036 "percent": 35 00:15:14.036 } 00:15:14.036 }, 00:15:14.036 "base_bdevs_list": [ 00:15:14.036 { 00:15:14.036 "name": "spare", 00:15:14.036 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:14.036 "is_configured": true, 00:15:14.036 "data_offset": 256, 00:15:14.036 "data_size": 7936 00:15:14.036 }, 00:15:14.036 { 00:15:14.036 "name": "BaseBdev2", 00:15:14.036 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:14.036 "is_configured": true, 00:15:14.036 "data_offset": 256, 00:15:14.036 "data_size": 7936 00:15:14.036 } 00:15:14.036 ] 00:15:14.036 }' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.036 01:59:19 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.418 "name": "raid_bdev1", 00:15:15.418 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:15.418 "strip_size_kb": 0, 00:15:15.418 "state": "online", 00:15:15.418 "raid_level": "raid1", 00:15:15.418 "superblock": true, 00:15:15.418 "num_base_bdevs": 2, 00:15:15.418 "num_base_bdevs_discovered": 2, 00:15:15.418 "num_base_bdevs_operational": 2, 00:15:15.418 "process": { 00:15:15.418 "type": "rebuild", 00:15:15.418 "target": "spare", 00:15:15.418 "progress": { 00:15:15.418 "blocks": 5888, 00:15:15.418 "percent": 74 00:15:15.418 } 00:15:15.418 }, 00:15:15.418 "base_bdevs_list": [ 00:15:15.418 { 
00:15:15.418 "name": "spare", 00:15:15.418 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:15.418 "is_configured": true, 00:15:15.418 "data_offset": 256, 00:15:15.418 "data_size": 7936 00:15:15.418 }, 00:15:15.418 { 00:15:15.418 "name": "BaseBdev2", 00:15:15.418 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:15.418 "is_configured": true, 00:15:15.418 "data_offset": 256, 00:15:15.418 "data_size": 7936 00:15:15.418 } 00:15:15.418 ] 00:15:15.418 }' 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.418 01:59:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:15.987 [2024-12-07 01:59:21.274601] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:15.987 [2024-12-07 01:59:21.274832] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:15.987 [2024-12-07 01:59:21.275007] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.247 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.247 "name": "raid_bdev1", 00:15:16.247 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:16.247 "strip_size_kb": 0, 00:15:16.247 "state": "online", 00:15:16.247 "raid_level": "raid1", 00:15:16.247 "superblock": true, 00:15:16.247 "num_base_bdevs": 2, 00:15:16.247 "num_base_bdevs_discovered": 2, 00:15:16.247 "num_base_bdevs_operational": 2, 00:15:16.247 "base_bdevs_list": [ 00:15:16.247 { 00:15:16.247 "name": "spare", 00:15:16.247 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:16.247 "is_configured": true, 00:15:16.247 "data_offset": 256, 00:15:16.247 "data_size": 7936 00:15:16.247 }, 00:15:16.248 { 00:15:16.248 "name": "BaseBdev2", 00:15:16.248 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:16.248 "is_configured": true, 00:15:16.248 "data_offset": 256, 00:15:16.248 "data_size": 7936 00:15:16.248 } 00:15:16.248 ] 00:15:16.248 }' 00:15:16.248 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.508 "name": "raid_bdev1", 00:15:16.508 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:16.508 "strip_size_kb": 0, 00:15:16.508 "state": "online", 00:15:16.508 "raid_level": "raid1", 00:15:16.508 "superblock": true, 00:15:16.508 "num_base_bdevs": 2, 00:15:16.508 "num_base_bdevs_discovered": 2, 00:15:16.508 "num_base_bdevs_operational": 2, 00:15:16.508 "base_bdevs_list": [ 00:15:16.508 { 00:15:16.508 "name": "spare", 00:15:16.508 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:16.508 "is_configured": true, 00:15:16.508 
"data_offset": 256, 00:15:16.508 "data_size": 7936 00:15:16.508 }, 00:15:16.508 { 00:15:16.508 "name": "BaseBdev2", 00:15:16.508 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:16.508 "is_configured": true, 00:15:16.508 "data_offset": 256, 00:15:16.508 "data_size": 7936 00:15:16.508 } 00:15:16.508 ] 00:15:16.508 }' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.508 "name": "raid_bdev1", 00:15:16.508 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:16.508 "strip_size_kb": 0, 00:15:16.508 "state": "online", 00:15:16.508 "raid_level": "raid1", 00:15:16.508 "superblock": true, 00:15:16.508 "num_base_bdevs": 2, 00:15:16.508 "num_base_bdevs_discovered": 2, 00:15:16.508 "num_base_bdevs_operational": 2, 00:15:16.508 "base_bdevs_list": [ 00:15:16.508 { 00:15:16.508 "name": "spare", 00:15:16.508 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:16.508 "is_configured": true, 00:15:16.508 "data_offset": 256, 00:15:16.508 "data_size": 7936 00:15:16.508 }, 00:15:16.508 { 00:15:16.508 "name": "BaseBdev2", 00:15:16.508 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:16.508 "is_configured": true, 00:15:16.508 "data_offset": 256, 00:15:16.508 "data_size": 7936 00:15:16.508 } 00:15:16.508 ] 00:15:16.508 }' 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.508 01:59:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.078 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.078 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.078 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.078 
[2024-12-07 01:59:22.373543] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.078 [2024-12-07 01:59:22.373635] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.078 [2024-12-07 01:59:22.373802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.078 [2024-12-07 01:59:22.373904] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.078 [2024-12-07 01:59:22.373952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:17.078 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.079 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:17.339 /dev/nbd0 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.339 1+0 records in 00:15:17.339 1+0 records out 00:15:17.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247621 s, 16.5 MB/s 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.339 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:17.599 /dev/nbd1 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:17.599 1+0 records in 00:15:17.599 1+0 records out 00:15:17.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376749 s, 10.9 MB/s 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.599 01:59:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:17.858 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:18.117 01:59:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.117 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.117 [2024-12-07 01:59:23.416920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.117 [2024-12-07 01:59:23.417015] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.117 [2024-12-07 01:59:23.417040] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:18.117 [2024-12-07 01:59:23.417053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.117 [2024-12-07 01:59:23.419290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.117 
[2024-12-07 01:59:23.419333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.117 [2024-12-07 01:59:23.419419] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:18.117 [2024-12-07 01:59:23.419456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.118 [2024-12-07 01:59:23.419563] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.118 spare 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.118 [2024-12-07 01:59:23.519478] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:18.118 [2024-12-07 01:59:23.519525] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:18.118 [2024-12-07 01:59:23.519882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:18.118 [2024-12-07 01:59:23.520067] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:18.118 [2024-12-07 01:59:23.520080] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:18.118 [2024-12-07 01:59:23.520228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.118 01:59:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.118 "name": "raid_bdev1", 00:15:18.118 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:18.118 "strip_size_kb": 0, 00:15:18.118 "state": "online", 00:15:18.118 "raid_level": "raid1", 00:15:18.118 "superblock": true, 00:15:18.118 "num_base_bdevs": 2, 00:15:18.118 "num_base_bdevs_discovered": 2, 00:15:18.118 "num_base_bdevs_operational": 2, 
00:15:18.118 "base_bdevs_list": [ 00:15:18.118 { 00:15:18.118 "name": "spare", 00:15:18.118 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:18.118 "is_configured": true, 00:15:18.118 "data_offset": 256, 00:15:18.118 "data_size": 7936 00:15:18.118 }, 00:15:18.118 { 00:15:18.118 "name": "BaseBdev2", 00:15:18.118 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:18.118 "is_configured": true, 00:15:18.118 "data_offset": 256, 00:15:18.118 "data_size": 7936 00:15:18.118 } 00:15:18.118 ] 00:15:18.118 }' 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.118 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.686 "name": "raid_bdev1", 00:15:18.686 
"uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:18.686 "strip_size_kb": 0, 00:15:18.686 "state": "online", 00:15:18.686 "raid_level": "raid1", 00:15:18.686 "superblock": true, 00:15:18.686 "num_base_bdevs": 2, 00:15:18.686 "num_base_bdevs_discovered": 2, 00:15:18.686 "num_base_bdevs_operational": 2, 00:15:18.686 "base_bdevs_list": [ 00:15:18.686 { 00:15:18.686 "name": "spare", 00:15:18.686 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:18.686 "is_configured": true, 00:15:18.686 "data_offset": 256, 00:15:18.686 "data_size": 7936 00:15:18.686 }, 00:15:18.686 { 00:15:18.686 "name": "BaseBdev2", 00:15:18.686 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:18.686 "is_configured": true, 00:15:18.686 "data_offset": 256, 00:15:18.686 "data_size": 7936 00:15:18.686 } 00:15:18.686 ] 00:15:18.686 }' 00:15:18.686 01:59:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.686 [2024-12-07 01:59:24.103830] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.686 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.687 
01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:18.687 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.946 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.946 "name": "raid_bdev1", 00:15:18.946 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:18.946 "strip_size_kb": 0, 00:15:18.946 "state": "online", 00:15:18.946 "raid_level": "raid1", 00:15:18.946 "superblock": true, 00:15:18.946 "num_base_bdevs": 2, 00:15:18.946 "num_base_bdevs_discovered": 1, 00:15:18.946 "num_base_bdevs_operational": 1, 00:15:18.946 "base_bdevs_list": [ 00:15:18.946 { 00:15:18.946 "name": null, 00:15:18.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.946 "is_configured": false, 00:15:18.946 "data_offset": 0, 00:15:18.946 "data_size": 7936 00:15:18.946 }, 00:15:18.946 { 00:15:18.946 "name": "BaseBdev2", 00:15:18.946 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:18.946 "is_configured": true, 00:15:18.946 "data_offset": 256, 00:15:18.946 "data_size": 7936 00:15:18.946 } 00:15:18.946 ] 00:15:18.946 }' 00:15:18.946 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.946 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.205 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.205 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.205 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:19.205 [2024-12-07 01:59:24.523093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.206 [2024-12-07 01:59:24.523369] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:15:19.206 [2024-12-07 01:59:24.523428] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:19.206 [2024-12-07 01:59:24.523525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.206 [2024-12-07 01:59:24.527491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:19.206 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.206 01:59:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:19.206 [2024-12-07 01:59:24.529516] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.165 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.165 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.165 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.166 
"name": "raid_bdev1", 00:15:20.166 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:20.166 "strip_size_kb": 0, 00:15:20.166 "state": "online", 00:15:20.166 "raid_level": "raid1", 00:15:20.166 "superblock": true, 00:15:20.166 "num_base_bdevs": 2, 00:15:20.166 "num_base_bdevs_discovered": 2, 00:15:20.166 "num_base_bdevs_operational": 2, 00:15:20.166 "process": { 00:15:20.166 "type": "rebuild", 00:15:20.166 "target": "spare", 00:15:20.166 "progress": { 00:15:20.166 "blocks": 2560, 00:15:20.166 "percent": 32 00:15:20.166 } 00:15:20.166 }, 00:15:20.166 "base_bdevs_list": [ 00:15:20.166 { 00:15:20.166 "name": "spare", 00:15:20.166 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:20.166 "is_configured": true, 00:15:20.166 "data_offset": 256, 00:15:20.166 "data_size": 7936 00:15:20.166 }, 00:15:20.166 { 00:15:20.166 "name": "BaseBdev2", 00:15:20.166 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:20.166 "is_configured": true, 00:15:20.166 "data_offset": 256, 00:15:20.166 "data_size": 7936 00:15:20.166 } 00:15:20.166 ] 00:15:20.166 }' 00:15:20.166 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.426 [2024-12-07 01:59:25.690405] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.426 [2024-12-07 
01:59:25.734260] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:20.426 [2024-12-07 01:59:25.734420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.426 [2024-12-07 01:59:25.734476] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:20.426 [2024-12-07 01:59:25.734498] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.426 "name": "raid_bdev1", 00:15:20.426 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:20.426 "strip_size_kb": 0, 00:15:20.426 "state": "online", 00:15:20.426 "raid_level": "raid1", 00:15:20.426 "superblock": true, 00:15:20.426 "num_base_bdevs": 2, 00:15:20.426 "num_base_bdevs_discovered": 1, 00:15:20.426 "num_base_bdevs_operational": 1, 00:15:20.426 "base_bdevs_list": [ 00:15:20.426 { 00:15:20.426 "name": null, 00:15:20.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.426 "is_configured": false, 00:15:20.426 "data_offset": 0, 00:15:20.426 "data_size": 7936 00:15:20.426 }, 00:15:20.426 { 00:15:20.426 "name": "BaseBdev2", 00:15:20.426 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:20.426 "is_configured": true, 00:15:20.426 "data_offset": 256, 00:15:20.426 "data_size": 7936 00:15:20.426 } 00:15:20.426 ] 00:15:20.426 }' 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.426 01:59:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.994 01:59:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:20.994 01:59:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.994 01:59:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:20.994 [2024-12-07 01:59:26.246020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:20.994 [2024-12-07 01:59:26.246087] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.994 [2024-12-07 01:59:26.246113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:20.994 [2024-12-07 01:59:26.246121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.994 [2024-12-07 01:59:26.246568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.994 [2024-12-07 01:59:26.246598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:20.994 [2024-12-07 01:59:26.246708] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:20.994 [2024-12-07 01:59:26.246769] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.994 [2024-12-07 01:59:26.246799] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:20.994 [2024-12-07 01:59:26.246829] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:20.994 [2024-12-07 01:59:26.250798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:20.994 spare 00:15:20.994 01:59:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.994 01:59:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:20.994 [2024-12-07 01:59:26.252744] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.955 "name": "raid_bdev1", 00:15:21.955 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:21.955 "strip_size_kb": 0, 00:15:21.955 
"state": "online", 00:15:21.955 "raid_level": "raid1", 00:15:21.955 "superblock": true, 00:15:21.955 "num_base_bdevs": 2, 00:15:21.955 "num_base_bdevs_discovered": 2, 00:15:21.955 "num_base_bdevs_operational": 2, 00:15:21.955 "process": { 00:15:21.955 "type": "rebuild", 00:15:21.955 "target": "spare", 00:15:21.955 "progress": { 00:15:21.955 "blocks": 2560, 00:15:21.955 "percent": 32 00:15:21.955 } 00:15:21.955 }, 00:15:21.955 "base_bdevs_list": [ 00:15:21.955 { 00:15:21.955 "name": "spare", 00:15:21.955 "uuid": "5d313357-716a-5a1d-9025-6fb7f07ba8ac", 00:15:21.955 "is_configured": true, 00:15:21.955 "data_offset": 256, 00:15:21.955 "data_size": 7936 00:15:21.955 }, 00:15:21.955 { 00:15:21.955 "name": "BaseBdev2", 00:15:21.955 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:21.955 "is_configured": true, 00:15:21.955 "data_offset": 256, 00:15:21.955 "data_size": 7936 00:15:21.955 } 00:15:21.955 ] 00:15:21.955 }' 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.955 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:21.955 [2024-12-07 01:59:27.409121] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.215 [2024-12-07 01:59:27.457475] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:15:22.215 [2024-12-07 01:59:27.457574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.215 [2024-12-07 01:59:27.457589] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:22.215 [2024-12-07 01:59:27.457599] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.215 01:59:27 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.215 "name": "raid_bdev1", 00:15:22.215 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:22.215 "strip_size_kb": 0, 00:15:22.215 "state": "online", 00:15:22.215 "raid_level": "raid1", 00:15:22.215 "superblock": true, 00:15:22.215 "num_base_bdevs": 2, 00:15:22.215 "num_base_bdevs_discovered": 1, 00:15:22.215 "num_base_bdevs_operational": 1, 00:15:22.215 "base_bdevs_list": [ 00:15:22.215 { 00:15:22.215 "name": null, 00:15:22.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.215 "is_configured": false, 00:15:22.215 "data_offset": 0, 00:15:22.215 "data_size": 7936 00:15:22.215 }, 00:15:22.215 { 00:15:22.215 "name": "BaseBdev2", 00:15:22.215 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:22.215 "is_configured": true, 00:15:22.215 "data_offset": 256, 00:15:22.215 "data_size": 7936 00:15:22.215 } 00:15:22.215 ] 00:15:22.215 }' 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.215 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.781 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.781 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.782 01:59:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.782 "name": "raid_bdev1", 00:15:22.782 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:22.782 "strip_size_kb": 0, 00:15:22.782 "state": "online", 00:15:22.782 "raid_level": "raid1", 00:15:22.782 "superblock": true, 00:15:22.782 "num_base_bdevs": 2, 00:15:22.782 "num_base_bdevs_discovered": 1, 00:15:22.782 "num_base_bdevs_operational": 1, 00:15:22.782 "base_bdevs_list": [ 00:15:22.782 { 00:15:22.782 "name": null, 00:15:22.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.782 "is_configured": false, 00:15:22.782 "data_offset": 0, 00:15:22.782 "data_size": 7936 00:15:22.782 }, 00:15:22.782 { 00:15:22.782 "name": "BaseBdev2", 00:15:22.782 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:22.782 "is_configured": true, 00:15:22.782 "data_offset": 256, 00:15:22.782 "data_size": 7936 00:15:22.782 } 00:15:22.782 ] 00:15:22.782 }' 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:22.782 [2024-12-07 01:59:28.152805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:22.782 [2024-12-07 01:59:28.152888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.782 [2024-12-07 01:59:28.152911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:22.782 [2024-12-07 01:59:28.152922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.782 [2024-12-07 01:59:28.153343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.782 [2024-12-07 01:59:28.153363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.782 [2024-12-07 01:59:28.153438] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:22.782 [2024-12-07 01:59:28.153457] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.782 [2024-12-07 01:59:28.153467] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:22.782 [2024-12-07 01:59:28.153478] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:22.782 BaseBdev1 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.782 01:59:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.718 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:23.977 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.977 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.977 "name": "raid_bdev1", 00:15:23.977 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:23.977 "strip_size_kb": 0, 00:15:23.977 "state": "online", 00:15:23.977 "raid_level": "raid1", 00:15:23.977 "superblock": true, 00:15:23.977 "num_base_bdevs": 2, 00:15:23.977 "num_base_bdevs_discovered": 1, 00:15:23.977 "num_base_bdevs_operational": 1, 00:15:23.977 "base_bdevs_list": [ 00:15:23.977 { 00:15:23.977 "name": null, 00:15:23.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.977 "is_configured": false, 00:15:23.977 "data_offset": 0, 00:15:23.977 "data_size": 7936 00:15:23.977 }, 00:15:23.977 { 00:15:23.977 "name": "BaseBdev2", 00:15:23.977 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:23.977 "is_configured": true, 00:15:23.977 "data_offset": 256, 00:15:23.977 "data_size": 7936 00:15:23.977 } 00:15:23.977 ] 00:15:23.977 }' 00:15:23.977 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.977 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.236 "name": "raid_bdev1", 00:15:24.236 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:24.236 "strip_size_kb": 0, 00:15:24.236 "state": "online", 00:15:24.236 "raid_level": "raid1", 00:15:24.236 "superblock": true, 00:15:24.236 "num_base_bdevs": 2, 00:15:24.236 "num_base_bdevs_discovered": 1, 00:15:24.236 "num_base_bdevs_operational": 1, 00:15:24.236 "base_bdevs_list": [ 00:15:24.236 { 00:15:24.236 "name": null, 00:15:24.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.236 "is_configured": false, 00:15:24.236 "data_offset": 0, 00:15:24.236 "data_size": 7936 00:15:24.236 }, 00:15:24.236 { 00:15:24.236 "name": "BaseBdev2", 00:15:24.236 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:24.236 "is_configured": true, 00:15:24.236 "data_offset": 256, 00:15:24.236 "data_size": 7936 00:15:24.236 } 00:15:24.236 ] 00:15:24.236 }' 00:15:24.236 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:24.495 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:24.496 [2024-12-07 01:59:29.778124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.496 [2024-12-07 01:59:29.778363] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:24.496 [2024-12-07 01:59:29.778429] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:24.496 request: 00:15:24.496 { 00:15:24.496 "base_bdev": "BaseBdev1", 00:15:24.496 "raid_bdev": "raid_bdev1", 00:15:24.496 "method": "bdev_raid_add_base_bdev", 00:15:24.496 "req_id": 1 00:15:24.496 } 00:15:24.496 Got JSON-RPC error response 00:15:24.496 response: 00:15:24.496 { 00:15:24.496 "code": -22, 00:15:24.496 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:24.496 } 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:24.496 01:59:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.433 "name": "raid_bdev1", 00:15:25.433 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:25.433 "strip_size_kb": 0, 00:15:25.433 "state": "online", 00:15:25.433 "raid_level": "raid1", 00:15:25.433 "superblock": true, 00:15:25.433 "num_base_bdevs": 2, 00:15:25.433 "num_base_bdevs_discovered": 1, 00:15:25.433 "num_base_bdevs_operational": 1, 00:15:25.433 "base_bdevs_list": [ 00:15:25.433 { 00:15:25.433 "name": null, 00:15:25.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.433 "is_configured": false, 00:15:25.433 "data_offset": 0, 00:15:25.433 "data_size": 7936 00:15:25.433 }, 00:15:25.433 { 00:15:25.433 "name": "BaseBdev2", 00:15:25.433 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:25.433 "is_configured": true, 00:15:25.433 "data_offset": 256, 00:15:25.433 "data_size": 7936 00:15:25.433 } 00:15:25.433 ] 00:15:25.433 }' 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.433 01:59:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.003 01:59:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.003 "name": "raid_bdev1", 00:15:26.003 "uuid": "e79765cb-7883-40d3-b259-a292c147b35e", 00:15:26.003 "strip_size_kb": 0, 00:15:26.003 "state": "online", 00:15:26.003 "raid_level": "raid1", 00:15:26.003 "superblock": true, 00:15:26.003 "num_base_bdevs": 2, 00:15:26.003 "num_base_bdevs_discovered": 1, 00:15:26.003 "num_base_bdevs_operational": 1, 00:15:26.003 "base_bdevs_list": [ 00:15:26.003 { 00:15:26.003 "name": null, 00:15:26.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.003 "is_configured": false, 00:15:26.003 "data_offset": 0, 00:15:26.003 "data_size": 7936 00:15:26.003 }, 00:15:26.003 { 00:15:26.003 "name": "BaseBdev2", 00:15:26.003 "uuid": "efcdefaa-1844-5180-a977-02aca9de3c1d", 00:15:26.003 "is_configured": true, 00:15:26.003 "data_offset": 256, 00:15:26.003 "data_size": 7936 00:15:26.003 } 00:15:26.003 ] 00:15:26.003 }' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.003 01:59:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96539 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 96539 ']' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 96539 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96539 00:15:26.003 killing process with pid 96539 00:15:26.003 Received shutdown signal, test time was about 60.000000 seconds 00:15:26.003 00:15:26.003 Latency(us) 00:15:26.003 [2024-12-07T01:59:31.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.003 [2024-12-07T01:59:31.465Z] =================================================================================================================== 00:15:26.003 [2024-12-07T01:59:31.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96539' 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 96539 00:15:26.003 [2024-12-07 01:59:31.405606] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:26.003 [2024-12-07 01:59:31.405750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.003 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 96539 00:15:26.003 [2024-12-07 
01:59:31.405805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.003 [2024-12-07 01:59:31.405814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:26.003 [2024-12-07 01:59:31.437413] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.263 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:26.263 ************************************ 00:15:26.263 END TEST raid_rebuild_test_sb_4k 00:15:26.263 ************************************ 00:15:26.263 00:15:26.263 real 0m18.355s 00:15:26.263 user 0m24.546s 00:15:26.263 sys 0m2.474s 00:15:26.263 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.263 01:59:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:26.263 01:59:31 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:26.263 01:59:31 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:26.263 01:59:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:26.263 01:59:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.263 01:59:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 ************************************ 00:15:26.523 START TEST raid_state_function_test_sb_md_separate 00:15:26.523 ************************************ 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:26.523 
01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:26.523 01:59:31 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97212 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97212' 00:15:26.523 Process raid pid: 97212 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97212 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97212 ']' 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.523 01:59:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.523 [2024-12-07 01:59:31.824639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:26.523 [2024-12-07 01:59:31.824847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.523 [2024-12-07 01:59:31.964387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.783 [2024-12-07 01:59:32.013997] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.783 [2024-12-07 01:59:32.055079] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.783 [2024-12-07 01:59:32.055199] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.354 [2024-12-07 01:59:32.664160] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.354 [2024-12-07 01:59:32.664220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev1 doesn't exist now 00:15:27.354 [2024-12-07 01:59:32.664240] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.354 [2024-12-07 01:59:32.664250] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.354 "name": "Existed_Raid", 00:15:27.354 "uuid": "a5fe5cc0-8e7f-4e22-83cd-a080e7dfea62", 00:15:27.354 "strip_size_kb": 0, 00:15:27.354 "state": "configuring", 00:15:27.354 "raid_level": "raid1", 00:15:27.354 "superblock": true, 00:15:27.354 "num_base_bdevs": 2, 00:15:27.354 "num_base_bdevs_discovered": 0, 00:15:27.354 "num_base_bdevs_operational": 2, 00:15:27.354 "base_bdevs_list": [ 00:15:27.354 { 00:15:27.354 "name": "BaseBdev1", 00:15:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.354 "is_configured": false, 00:15:27.354 "data_offset": 0, 00:15:27.354 "data_size": 0 00:15:27.354 }, 00:15:27.354 { 00:15:27.354 "name": "BaseBdev2", 00:15:27.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.354 "is_configured": false, 00:15:27.354 "data_offset": 0, 00:15:27.354 "data_size": 0 00:15:27.354 } 00:15:27.354 ] 00:15:27.354 }' 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.354 01:59:32 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 
[2024-12-07 01:59:33.155237] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.924 [2024-12-07 01:59:33.155342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 [2024-12-07 01:59:33.167212] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.924 [2024-12-07 01:59:33.167292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.924 [2024-12-07 01:59:33.167339] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.924 [2024-12-07 01:59:33.167362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 [2024-12-07 01:59:33.188524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.924 
BaseBdev1 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.924 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.924 [ 00:15:27.924 { 00:15:27.924 "name": "BaseBdev1", 00:15:27.924 "aliases": [ 00:15:27.924 "1a8f3f13-8934-4745-836d-7d6cc6f7bbac" 00:15:27.924 ], 00:15:27.924 "product_name": "Malloc disk", 
00:15:27.924 "block_size": 4096, 00:15:27.924 "num_blocks": 8192, 00:15:27.924 "uuid": "1a8f3f13-8934-4745-836d-7d6cc6f7bbac", 00:15:27.924 "md_size": 32, 00:15:27.924 "md_interleave": false, 00:15:27.924 "dif_type": 0, 00:15:27.924 "assigned_rate_limits": { 00:15:27.924 "rw_ios_per_sec": 0, 00:15:27.924 "rw_mbytes_per_sec": 0, 00:15:27.924 "r_mbytes_per_sec": 0, 00:15:27.924 "w_mbytes_per_sec": 0 00:15:27.924 }, 00:15:27.924 "claimed": true, 00:15:27.924 "claim_type": "exclusive_write", 00:15:27.924 "zoned": false, 00:15:27.924 "supported_io_types": { 00:15:27.924 "read": true, 00:15:27.924 "write": true, 00:15:27.924 "unmap": true, 00:15:27.924 "flush": true, 00:15:27.924 "reset": true, 00:15:27.924 "nvme_admin": false, 00:15:27.924 "nvme_io": false, 00:15:27.924 "nvme_io_md": false, 00:15:27.924 "write_zeroes": true, 00:15:27.925 "zcopy": true, 00:15:27.925 "get_zone_info": false, 00:15:27.925 "zone_management": false, 00:15:27.925 "zone_append": false, 00:15:27.925 "compare": false, 00:15:27.925 "compare_and_write": false, 00:15:27.925 "abort": true, 00:15:27.925 "seek_hole": false, 00:15:27.925 "seek_data": false, 00:15:27.925 "copy": true, 00:15:27.925 "nvme_iov_md": false 00:15:27.925 }, 00:15:27.925 "memory_domains": [ 00:15:27.925 { 00:15:27.925 "dma_device_id": "system", 00:15:27.925 "dma_device_type": 1 00:15:27.925 }, 00:15:27.925 { 00:15:27.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.925 "dma_device_type": 2 00:15:27.925 } 00:15:27.925 ], 00:15:27.925 "driver_specific": {} 00:15:27.925 } 00:15:27.925 ] 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:27.925 01:59:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.925 "name": "Existed_Raid", 00:15:27.925 "uuid": "3b0357f6-0767-4025-a6fc-16f537015cf9", 
00:15:27.925 "strip_size_kb": 0, 00:15:27.925 "state": "configuring", 00:15:27.925 "raid_level": "raid1", 00:15:27.925 "superblock": true, 00:15:27.925 "num_base_bdevs": 2, 00:15:27.925 "num_base_bdevs_discovered": 1, 00:15:27.925 "num_base_bdevs_operational": 2, 00:15:27.925 "base_bdevs_list": [ 00:15:27.925 { 00:15:27.925 "name": "BaseBdev1", 00:15:27.925 "uuid": "1a8f3f13-8934-4745-836d-7d6cc6f7bbac", 00:15:27.925 "is_configured": true, 00:15:27.925 "data_offset": 256, 00:15:27.925 "data_size": 7936 00:15:27.925 }, 00:15:27.925 { 00:15:27.925 "name": "BaseBdev2", 00:15:27.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.925 "is_configured": false, 00:15:27.925 "data_offset": 0, 00:15:27.925 "data_size": 0 00:15:27.925 } 00:15:27.925 ] 00:15:27.925 }' 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.925 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.494 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:28.494 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.495 [2024-12-07 01:59:33.711726] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.495 [2024-12-07 01:59:33.711794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:28.495 01:59:33 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.495 [2024-12-07 01:59:33.723794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.495 [2024-12-07 01:59:33.725757] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.495 [2024-12-07 01:59:33.725792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.495 "name": "Existed_Raid", 00:15:28.495 "uuid": "cae4692c-2695-4333-b51f-27384a15d5f8", 00:15:28.495 "strip_size_kb": 0, 00:15:28.495 "state": "configuring", 00:15:28.495 "raid_level": "raid1", 00:15:28.495 "superblock": true, 00:15:28.495 "num_base_bdevs": 2, 00:15:28.495 "num_base_bdevs_discovered": 1, 00:15:28.495 "num_base_bdevs_operational": 2, 00:15:28.495 "base_bdevs_list": [ 00:15:28.495 { 00:15:28.495 "name": "BaseBdev1", 00:15:28.495 "uuid": "1a8f3f13-8934-4745-836d-7d6cc6f7bbac", 00:15:28.495 "is_configured": true, 00:15:28.495 "data_offset": 256, 00:15:28.495 "data_size": 7936 00:15:28.495 }, 00:15:28.495 { 00:15:28.495 "name": "BaseBdev2", 00:15:28.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.495 "is_configured": false, 00:15:28.495 "data_offset": 0, 00:15:28.495 "data_size": 0 00:15:28.495 } 00:15:28.495 ] 00:15:28.495 }' 00:15:28.495 01:59:33 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.495 01:59:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.755 [2024-12-07 01:59:34.174545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.755 [2024-12-07 01:59:34.174910] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:28.755 [2024-12-07 01:59:34.174971] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:28.755 [2024-12-07 01:59:34.175113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:28.755 [2024-12-07 01:59:34.175275] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:28.755 [2024-12-07 01:59:34.175330] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:28.755 [2024-12-07 01:59:34.175474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.755 BaseBdev2 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.755 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.755 [ 00:15:28.755 { 00:15:28.755 "name": "BaseBdev2", 00:15:28.755 "aliases": [ 00:15:28.755 "48cdad81-1f0f-4450-a815-d9769458120b" 00:15:28.755 ], 00:15:28.755 "product_name": "Malloc disk", 00:15:28.755 "block_size": 4096, 00:15:28.755 "num_blocks": 8192, 00:15:28.755 "uuid": "48cdad81-1f0f-4450-a815-d9769458120b", 00:15:28.755 "md_size": 32, 00:15:28.755 "md_interleave": false, 00:15:28.755 "dif_type": 0, 00:15:28.755 "assigned_rate_limits": { 00:15:28.755 "rw_ios_per_sec": 0, 00:15:28.755 "rw_mbytes_per_sec": 0, 00:15:28.755 "r_mbytes_per_sec": 0, 00:15:28.755 "w_mbytes_per_sec": 0 00:15:28.755 }, 00:15:28.755 "claimed": true, 00:15:28.755 "claim_type": 
"exclusive_write", 00:15:28.755 "zoned": false, 00:15:28.755 "supported_io_types": { 00:15:28.755 "read": true, 00:15:28.755 "write": true, 00:15:28.755 "unmap": true, 00:15:28.755 "flush": true, 00:15:28.755 "reset": true, 00:15:28.755 "nvme_admin": false, 00:15:28.755 "nvme_io": false, 00:15:28.755 "nvme_io_md": false, 00:15:28.755 "write_zeroes": true, 00:15:28.755 "zcopy": true, 00:15:28.755 "get_zone_info": false, 00:15:28.755 "zone_management": false, 00:15:28.755 "zone_append": false, 00:15:28.755 "compare": false, 00:15:28.755 "compare_and_write": false, 00:15:28.755 "abort": true, 00:15:28.755 "seek_hole": false, 00:15:28.755 "seek_data": false, 00:15:28.755 "copy": true, 00:15:28.755 "nvme_iov_md": false 00:15:28.755 }, 00:15:28.755 "memory_domains": [ 00:15:28.755 { 00:15:28.755 "dma_device_id": "system", 00:15:28.755 "dma_device_type": 1 00:15:28.755 }, 00:15:28.755 { 00:15:28.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.755 "dma_device_type": 2 00:15:28.755 } 00:15:28.755 ], 00:15:29.015 "driver_specific": {} 00:15:29.015 } 00:15:29.015 ] 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.015 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.016 
01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.016 "name": "Existed_Raid", 00:15:29.016 "uuid": "cae4692c-2695-4333-b51f-27384a15d5f8", 00:15:29.016 "strip_size_kb": 0, 00:15:29.016 "state": "online", 00:15:29.016 "raid_level": "raid1", 00:15:29.016 "superblock": true, 00:15:29.016 "num_base_bdevs": 2, 00:15:29.016 "num_base_bdevs_discovered": 2, 00:15:29.016 "num_base_bdevs_operational": 2, 00:15:29.016 
"base_bdevs_list": [ 00:15:29.016 { 00:15:29.016 "name": "BaseBdev1", 00:15:29.016 "uuid": "1a8f3f13-8934-4745-836d-7d6cc6f7bbac", 00:15:29.016 "is_configured": true, 00:15:29.016 "data_offset": 256, 00:15:29.016 "data_size": 7936 00:15:29.016 }, 00:15:29.016 { 00:15:29.016 "name": "BaseBdev2", 00:15:29.016 "uuid": "48cdad81-1f0f-4450-a815-d9769458120b", 00:15:29.016 "is_configured": true, 00:15:29.016 "data_offset": 256, 00:15:29.016 "data_size": 7936 00:15:29.016 } 00:15:29.016 ] 00:15:29.016 }' 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.016 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:15:29.276 [2024-12-07 01:59:34.666053] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.276 "name": "Existed_Raid", 00:15:29.276 "aliases": [ 00:15:29.276 "cae4692c-2695-4333-b51f-27384a15d5f8" 00:15:29.276 ], 00:15:29.276 "product_name": "Raid Volume", 00:15:29.276 "block_size": 4096, 00:15:29.276 "num_blocks": 7936, 00:15:29.276 "uuid": "cae4692c-2695-4333-b51f-27384a15d5f8", 00:15:29.276 "md_size": 32, 00:15:29.276 "md_interleave": false, 00:15:29.276 "dif_type": 0, 00:15:29.276 "assigned_rate_limits": { 00:15:29.276 "rw_ios_per_sec": 0, 00:15:29.276 "rw_mbytes_per_sec": 0, 00:15:29.276 "r_mbytes_per_sec": 0, 00:15:29.276 "w_mbytes_per_sec": 0 00:15:29.276 }, 00:15:29.276 "claimed": false, 00:15:29.276 "zoned": false, 00:15:29.276 "supported_io_types": { 00:15:29.276 "read": true, 00:15:29.276 "write": true, 00:15:29.276 "unmap": false, 00:15:29.276 "flush": false, 00:15:29.276 "reset": true, 00:15:29.276 "nvme_admin": false, 00:15:29.276 "nvme_io": false, 00:15:29.276 "nvme_io_md": false, 00:15:29.276 "write_zeroes": true, 00:15:29.276 "zcopy": false, 00:15:29.276 "get_zone_info": false, 00:15:29.276 "zone_management": false, 00:15:29.276 "zone_append": false, 00:15:29.276 "compare": false, 00:15:29.276 "compare_and_write": false, 00:15:29.276 "abort": false, 00:15:29.276 "seek_hole": false, 00:15:29.276 "seek_data": false, 00:15:29.276 "copy": false, 00:15:29.276 "nvme_iov_md": false 00:15:29.276 }, 00:15:29.276 "memory_domains": [ 00:15:29.276 { 00:15:29.276 "dma_device_id": "system", 00:15:29.276 "dma_device_type": 1 00:15:29.276 }, 00:15:29.276 { 00:15:29.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.276 "dma_device_type": 2 00:15:29.276 }, 00:15:29.276 { 
00:15:29.276 "dma_device_id": "system", 00:15:29.276 "dma_device_type": 1 00:15:29.276 }, 00:15:29.276 { 00:15:29.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.276 "dma_device_type": 2 00:15:29.276 } 00:15:29.276 ], 00:15:29.276 "driver_specific": { 00:15:29.276 "raid": { 00:15:29.276 "uuid": "cae4692c-2695-4333-b51f-27384a15d5f8", 00:15:29.276 "strip_size_kb": 0, 00:15:29.276 "state": "online", 00:15:29.276 "raid_level": "raid1", 00:15:29.276 "superblock": true, 00:15:29.276 "num_base_bdevs": 2, 00:15:29.276 "num_base_bdevs_discovered": 2, 00:15:29.276 "num_base_bdevs_operational": 2, 00:15:29.276 "base_bdevs_list": [ 00:15:29.276 { 00:15:29.276 "name": "BaseBdev1", 00:15:29.276 "uuid": "1a8f3f13-8934-4745-836d-7d6cc6f7bbac", 00:15:29.276 "is_configured": true, 00:15:29.276 "data_offset": 256, 00:15:29.276 "data_size": 7936 00:15:29.276 }, 00:15:29.276 { 00:15:29.276 "name": "BaseBdev2", 00:15:29.276 "uuid": "48cdad81-1f0f-4450-a815-d9769458120b", 00:15:29.276 "is_configured": true, 00:15:29.276 "data_offset": 256, 00:15:29.276 "data_size": 7936 00:15:29.276 } 00:15:29.276 ] 00:15:29.276 } 00:15:29.276 } 00:15:29.276 }' 00:15:29.276 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:29.537 BaseBdev2' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.537 [2024-12-07 01:59:34.913396] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.537 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.538 "name": "Existed_Raid", 00:15:29.538 "uuid": "cae4692c-2695-4333-b51f-27384a15d5f8", 00:15:29.538 "strip_size_kb": 0, 00:15:29.538 "state": "online", 00:15:29.538 "raid_level": "raid1", 00:15:29.538 "superblock": true, 00:15:29.538 "num_base_bdevs": 2, 00:15:29.538 "num_base_bdevs_discovered": 1, 00:15:29.538 "num_base_bdevs_operational": 1, 00:15:29.538 "base_bdevs_list": [ 00:15:29.538 { 00:15:29.538 "name": null, 00:15:29.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.538 "is_configured": false, 00:15:29.538 "data_offset": 0, 00:15:29.538 "data_size": 7936 00:15:29.538 }, 00:15:29.538 { 00:15:29.538 "name": "BaseBdev2", 00:15:29.538 "uuid": 
"48cdad81-1f0f-4450-a815-d9769458120b", 00:15:29.538 "is_configured": true, 00:15:29.538 "data_offset": 256, 00:15:29.538 "data_size": 7936 00:15:29.538 } 00:15:29.538 ] 00:15:29.538 }' 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.538 01:59:34 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.115 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.115 [2024-12-07 01:59:35.420668] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.115 [2024-12-07 01:59:35.420854] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.115 [2024-12-07 01:59:35.433117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.115 [2024-12-07 01:59:35.433239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.115 [2024-12-07 01:59:35.433280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:30.116 01:59:35 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97212 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97212 ']' 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97212 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97212 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.116 killing process with pid 97212 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97212' 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97212 00:15:30.116 [2024-12-07 01:59:35.519233] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.116 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97212 00:15:30.116 [2024-12-07 01:59:35.520232] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.375 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:30.375 00:15:30.375 real 0m4.027s 00:15:30.375 user 0m6.323s 00:15:30.375 sys 0m0.846s 00:15:30.375 01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:30.375 
01:59:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.375 ************************************ 00:15:30.375 END TEST raid_state_function_test_sb_md_separate 00:15:30.375 ************************************ 00:15:30.375 01:59:35 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:30.375 01:59:35 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:30.375 01:59:35 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:30.375 01:59:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.375 ************************************ 00:15:30.375 START TEST raid_superblock_test_md_separate 00:15:30.375 ************************************ 00:15:30.375 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:30.375 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:30.375 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:30.375 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97449 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97449 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97449 ']' 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.635 01:59:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.635 [2024-12-07 01:59:35.919521] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:30.635 [2024-12-07 01:59:35.919729] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97449 ] 00:15:30.635 [2024-12-07 01:59:36.064355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.894 [2024-12-07 01:59:36.112959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.894 [2024-12-07 01:59:36.154567] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.894 [2024-12-07 01:59:36.154705] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:31.464 01:59:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 malloc1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 [2024-12-07 01:59:36.772984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:31.464 [2024-12-07 01:59:36.773087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.464 [2024-12-07 01:59:36.773118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:31.464 [2024-12-07 01:59:36.773132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.464 [2024-12-07 01:59:36.775107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.464 [2024-12-07 01:59:36.775152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:15:31.464 pt1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 malloc2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.464 01:59:36 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 [2024-12-07 01:59:36.813054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:31.464 [2024-12-07 01:59:36.813170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.464 [2024-12-07 01:59:36.813206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:31.464 [2024-12-07 01:59:36.813238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.464 [2024-12-07 01:59:36.815261] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.464 [2024-12-07 01:59:36.815337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:31.464 pt2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.464 [2024-12-07 01:59:36.825082] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:31.464 [2024-12-07 01:59:36.827011] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:31.464 [2024-12-07 01:59:36.827237] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:31.464 [2024-12-07 01:59:36.827292] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:31.464 [2024-12-07 01:59:36.827391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:31.464 [2024-12-07 01:59:36.827520] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:31.464 [2024-12-07 01:59:36.827562] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:31.464 [2024-12-07 01:59:36.827702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.464 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.465 01:59:36 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.465 "name": "raid_bdev1", 00:15:31.465 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:31.465 "strip_size_kb": 0, 00:15:31.465 "state": "online", 00:15:31.465 "raid_level": "raid1", 00:15:31.465 "superblock": true, 00:15:31.465 "num_base_bdevs": 2, 00:15:31.465 "num_base_bdevs_discovered": 2, 00:15:31.465 "num_base_bdevs_operational": 2, 00:15:31.465 "base_bdevs_list": [ 00:15:31.465 { 00:15:31.465 "name": "pt1", 00:15:31.465 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:31.465 "is_configured": true, 00:15:31.465 "data_offset": 256, 00:15:31.465 "data_size": 7936 00:15:31.465 }, 00:15:31.465 { 00:15:31.465 "name": "pt2", 00:15:31.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:31.465 "is_configured": true, 00:15:31.465 "data_offset": 256, 00:15:31.465 "data_size": 7936 00:15:31.465 } 00:15:31.465 ] 00:15:31.465 }' 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.465 01:59:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.034 [2024-12-07 01:59:37.300555] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.034 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:32.034 "name": "raid_bdev1", 00:15:32.034 "aliases": [ 00:15:32.034 "f456e284-5e0e-4b06-a71e-388b9a460572" 00:15:32.034 ], 00:15:32.034 "product_name": "Raid Volume", 00:15:32.034 "block_size": 4096, 00:15:32.034 "num_blocks": 7936, 00:15:32.034 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:32.034 "md_size": 32, 00:15:32.034 "md_interleave": false, 00:15:32.034 "dif_type": 0, 00:15:32.034 "assigned_rate_limits": { 00:15:32.034 "rw_ios_per_sec": 0, 00:15:32.034 "rw_mbytes_per_sec": 0, 00:15:32.034 "r_mbytes_per_sec": 0, 00:15:32.034 "w_mbytes_per_sec": 0 00:15:32.034 }, 00:15:32.034 "claimed": false, 00:15:32.034 "zoned": false, 
00:15:32.034 "supported_io_types": { 00:15:32.034 "read": true, 00:15:32.034 "write": true, 00:15:32.034 "unmap": false, 00:15:32.034 "flush": false, 00:15:32.034 "reset": true, 00:15:32.034 "nvme_admin": false, 00:15:32.034 "nvme_io": false, 00:15:32.034 "nvme_io_md": false, 00:15:32.034 "write_zeroes": true, 00:15:32.034 "zcopy": false, 00:15:32.034 "get_zone_info": false, 00:15:32.034 "zone_management": false, 00:15:32.034 "zone_append": false, 00:15:32.034 "compare": false, 00:15:32.034 "compare_and_write": false, 00:15:32.034 "abort": false, 00:15:32.034 "seek_hole": false, 00:15:32.034 "seek_data": false, 00:15:32.034 "copy": false, 00:15:32.034 "nvme_iov_md": false 00:15:32.034 }, 00:15:32.034 "memory_domains": [ 00:15:32.034 { 00:15:32.034 "dma_device_id": "system", 00:15:32.034 "dma_device_type": 1 00:15:32.034 }, 00:15:32.034 { 00:15:32.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.034 "dma_device_type": 2 00:15:32.034 }, 00:15:32.034 { 00:15:32.034 "dma_device_id": "system", 00:15:32.034 "dma_device_type": 1 00:15:32.034 }, 00:15:32.034 { 00:15:32.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.034 "dma_device_type": 2 00:15:32.034 } 00:15:32.034 ], 00:15:32.034 "driver_specific": { 00:15:32.034 "raid": { 00:15:32.034 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:32.034 "strip_size_kb": 0, 00:15:32.034 "state": "online", 00:15:32.034 "raid_level": "raid1", 00:15:32.034 "superblock": true, 00:15:32.034 "num_base_bdevs": 2, 00:15:32.034 "num_base_bdevs_discovered": 2, 00:15:32.034 "num_base_bdevs_operational": 2, 00:15:32.034 "base_bdevs_list": [ 00:15:32.034 { 00:15:32.034 "name": "pt1", 00:15:32.034 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.034 "is_configured": true, 00:15:32.034 "data_offset": 256, 00:15:32.034 "data_size": 7936 00:15:32.034 }, 00:15:32.034 { 00:15:32.034 "name": "pt2", 00:15:32.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.035 "is_configured": true, 00:15:32.035 "data_offset": 256, 
00:15:32.035 "data_size": 7936 00:15:32.035 } 00:15:32.035 ] 00:15:32.035 } 00:15:32.035 } 00:15:32.035 }' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:32.035 pt2' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.035 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.295 [2024-12-07 01:59:37.508144] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f456e284-5e0e-4b06-a71e-388b9a460572 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z f456e284-5e0e-4b06-a71e-388b9a460572 ']' 00:15:32.295 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.296 01:59:37 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 [2024-12-07 01:59:37.535827] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.296 [2024-12-07 01:59:37.535854] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.296 [2024-12-07 01:59:37.535946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.296 [2024-12-07 01:59:37.536006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:32.296 [2024-12-07 01:59:37.536016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@650 -- # local es=0 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 [2024-12-07 01:59:37.659634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:32.296 [2024-12-07 01:59:37.661615] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:32.296 [2024-12-07 01:59:37.661734] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:32.296 [2024-12-07 01:59:37.661826] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:32.296 [2024-12-07 01:59:37.661883] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.296 [2024-12-07 01:59:37.661919] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:15:32.296 request: 00:15:32.296 { 00:15:32.296 "name": "raid_bdev1", 00:15:32.296 "raid_level": "raid1", 00:15:32.296 "base_bdevs": [ 00:15:32.296 "malloc1", 00:15:32.296 "malloc2" 00:15:32.296 ], 00:15:32.296 "superblock": false, 00:15:32.296 "method": "bdev_raid_create", 00:15:32.296 "req_id": 1 00:15:32.296 } 00:15:32.296 Got JSON-RPC error response 00:15:32.296 response: 00:15:32.296 { 00:15:32.296 "code": -17, 00:15:32.296 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:32.296 } 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 [2024-12-07 01:59:37.715490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:32.296 [2024-12-07 01:59:37.715617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.296 [2024-12-07 01:59:37.715659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:32.296 [2024-12-07 01:59:37.715702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.296 [2024-12-07 01:59:37.717703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.296 [2024-12-07 01:59:37.717768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:32.296 [2024-12-07 01:59:37.717863] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:32.296 pt1 00:15:32.296 [2024-12-07 01:59:37.717932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.296 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.556 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.556 "name": "raid_bdev1", 00:15:32.556 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:32.556 "strip_size_kb": 0, 00:15:32.556 "state": "configuring", 00:15:32.556 "raid_level": "raid1", 00:15:32.556 "superblock": true, 00:15:32.556 "num_base_bdevs": 2, 00:15:32.556 "num_base_bdevs_discovered": 1, 00:15:32.556 "num_base_bdevs_operational": 2, 00:15:32.556 "base_bdevs_list": [ 00:15:32.556 { 00:15:32.556 "name": "pt1", 00:15:32.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.556 "is_configured": true, 00:15:32.556 "data_offset": 256, 00:15:32.556 "data_size": 7936 00:15:32.556 }, 00:15:32.556 { 
00:15:32.556 "name": null, 00:15:32.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:32.556 "is_configured": false, 00:15:32.556 "data_offset": 256, 00:15:32.556 "data_size": 7936 00:15:32.556 } 00:15:32.556 ] 00:15:32.556 }' 00:15:32.556 01:59:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.556 01:59:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 [2024-12-07 01:59:38.146757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:32.817 [2024-12-07 01:59:38.146887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:32.817 [2024-12-07 01:59:38.146925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:32.817 [2024-12-07 01:59:38.146953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:32.817 [2024-12-07 01:59:38.147194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:32.817 [2024-12-07 01:59:38.147243] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:32.817 [2024-12-07 01:59:38.147323] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:32.817 [2024-12-07 01:59:38.147378] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:32.817 [2024-12-07 01:59:38.147498] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:32.817 [2024-12-07 01:59:38.147533] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:32.817 [2024-12-07 01:59:38.147630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:32.817 [2024-12-07 01:59:38.147760] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:32.817 [2024-12-07 01:59:38.147804] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:32.817 [2024-12-07 01:59:38.147910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.817 pt2 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.817 01:59:38 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.817 "name": "raid_bdev1", 00:15:32.817 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:32.817 "strip_size_kb": 0, 00:15:32.817 "state": "online", 00:15:32.817 "raid_level": "raid1", 00:15:32.817 "superblock": true, 00:15:32.817 "num_base_bdevs": 2, 00:15:32.817 "num_base_bdevs_discovered": 2, 00:15:32.817 "num_base_bdevs_operational": 2, 00:15:32.817 "base_bdevs_list": [ 00:15:32.817 { 00:15:32.817 "name": "pt1", 00:15:32.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:32.817 "is_configured": true, 00:15:32.817 "data_offset": 256, 00:15:32.817 "data_size": 7936 00:15:32.817 }, 00:15:32.817 { 00:15:32.817 "name": "pt2", 00:15:32.817 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:32.817 "is_configured": true, 00:15:32.817 "data_offset": 256, 00:15:32.817 "data_size": 7936 00:15:32.817 } 00:15:32.817 ] 00:15:32.817 }' 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.817 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 [2024-12-07 01:59:38.662135] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.386 "name": "raid_bdev1", 00:15:33.386 
"aliases": [ 00:15:33.386 "f456e284-5e0e-4b06-a71e-388b9a460572" 00:15:33.386 ], 00:15:33.386 "product_name": "Raid Volume", 00:15:33.386 "block_size": 4096, 00:15:33.386 "num_blocks": 7936, 00:15:33.386 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:33.386 "md_size": 32, 00:15:33.386 "md_interleave": false, 00:15:33.386 "dif_type": 0, 00:15:33.386 "assigned_rate_limits": { 00:15:33.386 "rw_ios_per_sec": 0, 00:15:33.386 "rw_mbytes_per_sec": 0, 00:15:33.386 "r_mbytes_per_sec": 0, 00:15:33.386 "w_mbytes_per_sec": 0 00:15:33.386 }, 00:15:33.386 "claimed": false, 00:15:33.386 "zoned": false, 00:15:33.386 "supported_io_types": { 00:15:33.386 "read": true, 00:15:33.386 "write": true, 00:15:33.386 "unmap": false, 00:15:33.386 "flush": false, 00:15:33.386 "reset": true, 00:15:33.386 "nvme_admin": false, 00:15:33.386 "nvme_io": false, 00:15:33.386 "nvme_io_md": false, 00:15:33.386 "write_zeroes": true, 00:15:33.386 "zcopy": false, 00:15:33.386 "get_zone_info": false, 00:15:33.386 "zone_management": false, 00:15:33.386 "zone_append": false, 00:15:33.386 "compare": false, 00:15:33.386 "compare_and_write": false, 00:15:33.386 "abort": false, 00:15:33.386 "seek_hole": false, 00:15:33.386 "seek_data": false, 00:15:33.386 "copy": false, 00:15:33.386 "nvme_iov_md": false 00:15:33.386 }, 00:15:33.386 "memory_domains": [ 00:15:33.386 { 00:15:33.386 "dma_device_id": "system", 00:15:33.386 "dma_device_type": 1 00:15:33.386 }, 00:15:33.386 { 00:15:33.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.386 "dma_device_type": 2 00:15:33.386 }, 00:15:33.386 { 00:15:33.386 "dma_device_id": "system", 00:15:33.386 "dma_device_type": 1 00:15:33.386 }, 00:15:33.386 { 00:15:33.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.386 "dma_device_type": 2 00:15:33.386 } 00:15:33.386 ], 00:15:33.386 "driver_specific": { 00:15:33.386 "raid": { 00:15:33.386 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:33.386 "strip_size_kb": 0, 00:15:33.386 "state": "online", 00:15:33.386 
"raid_level": "raid1", 00:15:33.386 "superblock": true, 00:15:33.386 "num_base_bdevs": 2, 00:15:33.386 "num_base_bdevs_discovered": 2, 00:15:33.386 "num_base_bdevs_operational": 2, 00:15:33.386 "base_bdevs_list": [ 00:15:33.386 { 00:15:33.386 "name": "pt1", 00:15:33.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:33.386 "is_configured": true, 00:15:33.386 "data_offset": 256, 00:15:33.386 "data_size": 7936 00:15:33.386 }, 00:15:33.386 { 00:15:33.386 "name": "pt2", 00:15:33.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.386 "is_configured": true, 00:15:33.386 "data_offset": 256, 00:15:33.386 "data_size": 7936 00:15:33.386 } 00:15:33.386 ] 00:15:33.386 } 00:15:33.386 } 00:15:33.386 }' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:33.386 pt2' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.386 01:59:38 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.386 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.646 [2024-12-07 01:59:38.889761] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' f456e284-5e0e-4b06-a71e-388b9a460572 '!=' f456e284-5e0e-4b06-a71e-388b9a460572 ']' 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.646 [2024-12-07 01:59:38.933452] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.646 
01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.646 "name": "raid_bdev1", 00:15:33.646 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:33.646 "strip_size_kb": 0, 00:15:33.646 "state": "online", 00:15:33.646 "raid_level": "raid1", 00:15:33.646 "superblock": true, 00:15:33.646 "num_base_bdevs": 2, 00:15:33.646 "num_base_bdevs_discovered": 1, 00:15:33.646 "num_base_bdevs_operational": 1, 00:15:33.646 "base_bdevs_list": [ 00:15:33.646 { 00:15:33.646 "name": null, 00:15:33.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.646 "is_configured": false, 00:15:33.646 "data_offset": 0, 00:15:33.646 "data_size": 7936 00:15:33.646 }, 00:15:33.646 { 00:15:33.646 "name": "pt2", 00:15:33.646 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:33.646 "is_configured": true, 00:15:33.646 "data_offset": 256, 00:15:33.646 "data_size": 7936 00:15:33.646 } 
00:15:33.646 ] 00:15:33.646 }' 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.646 01:59:38 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 [2024-12-07 01:59:39.384612] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.217 [2024-12-07 01:59:39.384736] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.217 [2024-12-07 01:59:39.384841] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.217 [2024-12-07 01:59:39.384895] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.217 [2024-12-07 01:59:39.384905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.217 01:59:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 [2024-12-07 01:59:39.460469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:34.217 [2024-12-07 
01:59:39.460594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.217 [2024-12-07 01:59:39.460643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:34.217 [2024-12-07 01:59:39.460687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.217 [2024-12-07 01:59:39.462650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.217 [2024-12-07 01:59:39.462732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:34.217 [2024-12-07 01:59:39.462815] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:34.217 [2024-12-07 01:59:39.462874] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.217 [2024-12-07 01:59:39.463007] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:34.217 [2024-12-07 01:59:39.463042] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:34.217 [2024-12-07 01:59:39.463127] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:34.217 [2024-12-07 01:59:39.463231] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:34.217 [2024-12-07 01:59:39.463268] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:34.217 [2024-12-07 01:59:39.463374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.217 pt2 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.217 "name": "raid_bdev1", 00:15:34.217 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:34.217 "strip_size_kb": 0, 00:15:34.217 "state": "online", 00:15:34.217 "raid_level": "raid1", 00:15:34.217 "superblock": true, 00:15:34.217 "num_base_bdevs": 2, 00:15:34.217 
"num_base_bdevs_discovered": 1, 00:15:34.217 "num_base_bdevs_operational": 1, 00:15:34.217 "base_bdevs_list": [ 00:15:34.217 { 00:15:34.217 "name": null, 00:15:34.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.217 "is_configured": false, 00:15:34.217 "data_offset": 256, 00:15:34.217 "data_size": 7936 00:15:34.217 }, 00:15:34.217 { 00:15:34.217 "name": "pt2", 00:15:34.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.217 "is_configured": true, 00:15:34.217 "data_offset": 256, 00:15:34.217 "data_size": 7936 00:15:34.217 } 00:15:34.217 ] 00:15:34.217 }' 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.217 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.483 [2024-12-07 01:59:39.895725] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.483 [2024-12-07 01:59:39.895762] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.483 [2024-12-07 01:59:39.895849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.483 [2024-12-07 01:59:39.895898] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.483 [2024-12-07 01:59:39.895910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.483 01:59:39 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.483 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.743 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.743 [2024-12-07 01:59:39.955635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:34.743 [2024-12-07 01:59:39.955771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.743 [2024-12-07 01:59:39.955831] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:34.743 [2024-12-07 01:59:39.955871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.743 [2024-12-07 01:59:39.957971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.743 [2024-12-07 01:59:39.958049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:15:34.743 [2024-12-07 01:59:39.958126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:34.743 [2024-12-07 01:59:39.958181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:34.744 [2024-12-07 01:59:39.958330] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:34.744 [2024-12-07 01:59:39.958387] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:34.744 [2024-12-07 01:59:39.958469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:34.744 [2024-12-07 01:59:39.958536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:34.744 [2024-12-07 01:59:39.958611] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:34.744 [2024-12-07 01:59:39.958623] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:34.744 [2024-12-07 01:59:39.958700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:34.744 [2024-12-07 01:59:39.958786] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:34.744 [2024-12-07 01:59:39.958795] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:34.744 [2024-12-07 01:59:39.958877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.744 pt1 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.744 01:59:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.744 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.744 "name": "raid_bdev1", 00:15:34.744 "uuid": "f456e284-5e0e-4b06-a71e-388b9a460572", 00:15:34.744 "strip_size_kb": 0, 00:15:34.744 "state": "online", 00:15:34.744 "raid_level": "raid1", 
00:15:34.744 "superblock": true, 00:15:34.744 "num_base_bdevs": 2, 00:15:34.744 "num_base_bdevs_discovered": 1, 00:15:34.744 "num_base_bdevs_operational": 1, 00:15:34.744 "base_bdevs_list": [ 00:15:34.744 { 00:15:34.744 "name": null, 00:15:34.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.744 "is_configured": false, 00:15:34.744 "data_offset": 256, 00:15:34.744 "data_size": 7936 00:15:34.744 }, 00:15:34.744 { 00:15:34.744 "name": "pt2", 00:15:34.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:34.744 "is_configured": true, 00:15:34.744 "data_offset": 256, 00:15:34.744 "data_size": 7936 00:15:34.744 } 00:15:34.744 ] 00:15:34.744 }' 00:15:34.744 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.744 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:35.004 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.263 01:59:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:35.263 [2024-12-07 01:59:40.471040] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' f456e284-5e0e-4b06-a71e-388b9a460572 '!=' f456e284-5e0e-4b06-a71e-388b9a460572 ']' 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97449 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97449 ']' 00:15:35.263 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 97449 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97449 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97449' 00:15:35.264 killing process with pid 97449 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 97449 00:15:35.264 [2024-12-07 01:59:40.549701] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.264 [2024-12-07 01:59:40.549807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:35.264 [2024-12-07 01:59:40.549858] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.264 [2024-12-07 01:59:40.549866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:35.264 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 97449 00:15:35.264 [2024-12-07 01:59:40.573845] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:35.524 01:59:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:35.524 00:15:35.524 real 0m4.980s 00:15:35.524 user 0m8.179s 00:15:35.524 sys 0m1.049s 00:15:35.524 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:35.524 01:59:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.524 ************************************ 00:15:35.524 END TEST raid_superblock_test_md_separate 00:15:35.524 ************************************ 00:15:35.524 01:59:40 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:35.524 01:59:40 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:35.524 01:59:40 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:35.524 01:59:40 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:35.524 01:59:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.524 ************************************ 00:15:35.524 START TEST raid_rebuild_test_sb_md_separate 00:15:35.524 ************************************ 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.524 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97766 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97766 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 97766 ']' 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.525 01:59:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.525 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:35.525 Zero copy mechanism will not be used. 00:15:35.525 [2024-12-07 01:59:40.979018] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:35.525 [2024-12-07 01:59:40.979125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97766 ] 00:15:35.785 [2024-12-07 01:59:41.123456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.785 [2024-12-07 01:59:41.172683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.785 [2024-12-07 01:59:41.213928] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.785 [2024-12-07 01:59:41.213977] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.355 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 BaseBdev1_malloc 
00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 [2024-12-07 01:59:41.836131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:36.615 [2024-12-07 01:59:41.836187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.615 [2024-12-07 01:59:41.836216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:36.615 [2024-12-07 01:59:41.836228] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.615 [2024-12-07 01:59:41.838200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.615 [2024-12-07 01:59:41.838237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:36.615 BaseBdev1 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 BaseBdev2_malloc 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 [2024-12-07 01:59:41.873808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:36.615 [2024-12-07 01:59:41.873864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.615 [2024-12-07 01:59:41.873888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:36.615 [2024-12-07 01:59:41.873897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.615 [2024-12-07 01:59:41.875968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.615 [2024-12-07 01:59:41.876060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:36.615 BaseBdev2 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 spare_malloc 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 spare_delay 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.615 [2024-12-07 01:59:41.914939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.615 [2024-12-07 01:59:41.915050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.615 [2024-12-07 01:59:41.915079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:36.615 [2024-12-07 01:59:41.915091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.615 [2024-12-07 01:59:41.917084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.615 [2024-12-07 01:59:41.917118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.615 spare 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.615 [2024-12-07 01:59:41.926974] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.615 [2024-12-07 01:59:41.928864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.615 [2024-12-07 01:59:41.929036] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:36.615 [2024-12-07 01:59:41.929049] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:36.615 [2024-12-07 01:59:41.929143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:36.615 [2024-12-07 01:59:41.929249] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:36.615 [2024-12-07 01:59:41.929260] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:36.615 [2024-12-07 01:59:41.929357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.615 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.615 01:59:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.616 "name": "raid_bdev1", 00:15:36.616 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:36.616 "strip_size_kb": 0, 00:15:36.616 "state": "online", 00:15:36.616 "raid_level": "raid1", 00:15:36.616 "superblock": true, 00:15:36.616 "num_base_bdevs": 2, 00:15:36.616 "num_base_bdevs_discovered": 2, 00:15:36.616 "num_base_bdevs_operational": 2, 00:15:36.616 "base_bdevs_list": [ 00:15:36.616 { 00:15:36.616 "name": "BaseBdev1", 00:15:36.616 "uuid": "4d19bdb6-4fb4-59d1-9ec9-807d28f00a20", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 256, 00:15:36.616 "data_size": 7936 00:15:36.616 }, 00:15:36.616 { 00:15:36.616 "name": "BaseBdev2", 00:15:36.616 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:36.616 "is_configured": true, 00:15:36.616 "data_offset": 256, 00:15:36.616 "data_size": 7936 
00:15:36.616 } 00:15:36.616 ] 00:15:36.616 }' 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.616 01:59:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.194 [2024-12-07 01:59:42.390418] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:37.194 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.195 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:37.459 [2024-12-07 01:59:42.669767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:37.459 /dev/nbd0 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.459 1+0 records in 00:15:37.459 1+0 records out 00:15:37.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256688 s, 16.0 MB/s 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:37.459 01:59:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:37.459 01:59:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:38.035 7936+0 records in 00:15:38.035 7936+0 records out 00:15:38.035 32505856 bytes (33 MB, 31 MiB) copied, 0.55581 s, 58.5 MB/s 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.036 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.295 [2024-12-07 01:59:43.505749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.295 01:59:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.295 [2024-12-07 01:59:43.523572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.295 "name": "raid_bdev1", 00:15:38.295 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:38.295 "strip_size_kb": 0, 00:15:38.295 "state": "online", 00:15:38.295 "raid_level": "raid1", 00:15:38.295 "superblock": true, 00:15:38.295 "num_base_bdevs": 2, 00:15:38.295 "num_base_bdevs_discovered": 1, 00:15:38.295 "num_base_bdevs_operational": 1, 00:15:38.295 "base_bdevs_list": [ 00:15:38.295 { 00:15:38.295 "name": null, 00:15:38.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.295 "is_configured": false, 00:15:38.295 "data_offset": 0, 00:15:38.295 "data_size": 7936 00:15:38.295 }, 00:15:38.295 { 00:15:38.295 "name": "BaseBdev2", 00:15:38.295 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:38.295 "is_configured": true, 00:15:38.295 "data_offset": 256, 00:15:38.295 "data_size": 7936 00:15:38.295 } 00:15:38.295 ] 00:15:38.295 }' 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.295 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.555 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.555 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.555 01:59:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.555 [2024-12-07 01:59:44.006728] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.555 [2024-12-07 01:59:44.008535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:38.555 [2024-12-07 01:59:44.010421] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.555 01:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.555 01:59:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.937 "name": "raid_bdev1", 00:15:39.937 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:39.937 "strip_size_kb": 0, 00:15:39.937 "state": "online", 00:15:39.937 "raid_level": "raid1", 00:15:39.937 "superblock": true, 00:15:39.937 "num_base_bdevs": 2, 00:15:39.937 "num_base_bdevs_discovered": 2, 00:15:39.937 "num_base_bdevs_operational": 2, 00:15:39.937 "process": { 00:15:39.937 "type": "rebuild", 00:15:39.937 "target": "spare", 00:15:39.937 "progress": { 00:15:39.937 "blocks": 2560, 00:15:39.937 "percent": 32 00:15:39.937 } 00:15:39.937 }, 00:15:39.937 "base_bdevs_list": [ 00:15:39.937 { 00:15:39.937 "name": "spare", 00:15:39.937 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:39.937 "is_configured": true, 00:15:39.937 "data_offset": 256, 00:15:39.937 "data_size": 7936 00:15:39.937 }, 00:15:39.937 { 00:15:39.937 "name": "BaseBdev2", 00:15:39.937 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:39.937 "is_configured": true, 00:15:39.937 "data_offset": 256, 00:15:39.937 "data_size": 7936 00:15:39.937 } 00:15:39.937 ] 00:15:39.937 }' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.937 01:59:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.937 [2024-12-07 01:59:45.181812] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.937 [2024-12-07 01:59:45.216163] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.937 [2024-12-07 01:59:45.216234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.937 [2024-12-07 01:59:45.216253] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.937 [2024-12-07 01:59:45.216260] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.937 01:59:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.937 "name": "raid_bdev1", 00:15:39.937 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:39.937 "strip_size_kb": 0, 00:15:39.937 "state": "online", 00:15:39.937 "raid_level": "raid1", 00:15:39.937 "superblock": true, 00:15:39.937 "num_base_bdevs": 2, 00:15:39.937 "num_base_bdevs_discovered": 1, 00:15:39.937 "num_base_bdevs_operational": 1, 00:15:39.937 "base_bdevs_list": [ 00:15:39.937 { 00:15:39.937 "name": null, 00:15:39.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.937 "is_configured": false, 00:15:39.937 "data_offset": 0, 00:15:39.937 "data_size": 7936 00:15:39.937 }, 00:15:39.937 { 00:15:39.937 "name": "BaseBdev2", 00:15:39.937 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:39.937 "is_configured": true, 00:15:39.937 "data_offset": 256, 00:15:39.937 "data_size": 7936 00:15:39.937 } 00:15:39.937 ] 00:15:39.937 }' 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.937 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.504 "name": "raid_bdev1", 00:15:40.504 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:40.504 "strip_size_kb": 0, 00:15:40.504 "state": "online", 00:15:40.504 "raid_level": "raid1", 00:15:40.504 "superblock": true, 00:15:40.504 "num_base_bdevs": 2, 00:15:40.504 "num_base_bdevs_discovered": 1, 00:15:40.504 "num_base_bdevs_operational": 1, 00:15:40.504 "base_bdevs_list": [ 00:15:40.504 { 00:15:40.504 "name": null, 00:15:40.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.504 
"is_configured": false, 00:15:40.504 "data_offset": 0, 00:15:40.504 "data_size": 7936 00:15:40.504 }, 00:15:40.504 { 00:15:40.504 "name": "BaseBdev2", 00:15:40.504 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:40.504 "is_configured": true, 00:15:40.504 "data_offset": 256, 00:15:40.504 "data_size": 7936 00:15:40.504 } 00:15:40.504 ] 00:15:40.504 }' 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.504 [2024-12-07 01:59:45.798546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.504 [2024-12-07 01:59:45.800422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:40.504 [2024-12-07 01:59:45.802346] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.504 01:59:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.440 01:59:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.440 "name": "raid_bdev1", 00:15:41.440 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:41.440 "strip_size_kb": 0, 00:15:41.440 "state": "online", 00:15:41.440 "raid_level": "raid1", 00:15:41.440 "superblock": true, 00:15:41.440 "num_base_bdevs": 2, 00:15:41.440 "num_base_bdevs_discovered": 2, 00:15:41.440 "num_base_bdevs_operational": 2, 00:15:41.440 "process": { 00:15:41.440 "type": "rebuild", 00:15:41.440 "target": "spare", 00:15:41.440 "progress": { 00:15:41.440 "blocks": 2560, 00:15:41.440 "percent": 32 00:15:41.440 } 00:15:41.440 }, 00:15:41.440 "base_bdevs_list": [ 00:15:41.440 { 00:15:41.440 "name": "spare", 00:15:41.440 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:41.440 "is_configured": true, 00:15:41.440 "data_offset": 256, 00:15:41.440 "data_size": 7936 00:15:41.440 }, 
00:15:41.440 { 00:15:41.440 "name": "BaseBdev2", 00:15:41.440 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:41.440 "is_configured": true, 00:15:41.440 "data_offset": 256, 00:15:41.440 "data_size": 7936 00:15:41.440 } 00:15:41.440 ] 00:15:41.440 }' 00:15:41.440 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:41.699 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=584 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.699 01:59:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.699 "name": "raid_bdev1", 00:15:41.699 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:41.699 "strip_size_kb": 0, 00:15:41.699 "state": "online", 00:15:41.699 "raid_level": "raid1", 00:15:41.699 "superblock": true, 00:15:41.699 "num_base_bdevs": 2, 00:15:41.699 "num_base_bdevs_discovered": 2, 00:15:41.699 "num_base_bdevs_operational": 2, 00:15:41.699 "process": { 00:15:41.699 "type": "rebuild", 00:15:41.699 "target": "spare", 00:15:41.699 "progress": { 00:15:41.699 "blocks": 2816, 00:15:41.699 "percent": 35 00:15:41.699 } 00:15:41.699 }, 00:15:41.699 "base_bdevs_list": [ 00:15:41.699 { 00:15:41.699 "name": "spare", 00:15:41.699 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:41.699 "is_configured": true, 00:15:41.699 "data_offset": 256, 00:15:41.699 "data_size": 7936 00:15:41.699 }, 00:15:41.699 { 00:15:41.699 "name": "BaseBdev2", 00:15:41.699 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:41.699 
"is_configured": true, 00:15:41.699 "data_offset": 256, 00:15:41.699 "data_size": 7936 00:15:41.699 } 00:15:41.699 ] 00:15:41.699 }' 00:15:41.699 01:59:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.699 01:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.699 01:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.699 01:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.699 01:59:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.079 01:59:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.079 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.079 "name": "raid_bdev1", 00:15:43.079 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:43.079 "strip_size_kb": 0, 00:15:43.079 "state": "online", 00:15:43.079 "raid_level": "raid1", 00:15:43.079 "superblock": true, 00:15:43.079 "num_base_bdevs": 2, 00:15:43.079 "num_base_bdevs_discovered": 2, 00:15:43.079 "num_base_bdevs_operational": 2, 00:15:43.079 "process": { 00:15:43.080 "type": "rebuild", 00:15:43.080 "target": "spare", 00:15:43.080 "progress": { 00:15:43.080 "blocks": 5888, 00:15:43.080 "percent": 74 00:15:43.080 } 00:15:43.080 }, 00:15:43.080 "base_bdevs_list": [ 00:15:43.080 { 00:15:43.080 "name": "spare", 00:15:43.080 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:43.080 "is_configured": true, 00:15:43.080 "data_offset": 256, 00:15:43.080 "data_size": 7936 00:15:43.080 }, 00:15:43.080 { 00:15:43.080 "name": "BaseBdev2", 00:15:43.080 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:43.080 "is_configured": true, 00:15:43.080 "data_offset": 256, 00:15:43.080 "data_size": 7936 00:15:43.080 } 00:15:43.080 ] 00:15:43.080 }' 00:15:43.080 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.080 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.080 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.080 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.080 01:59:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.649 [2024-12-07 01:59:48.915625] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:15:43.649 [2024-12-07 01:59:48.915750] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:43.649 [2024-12-07 01:59:48.915916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.907 "name": "raid_bdev1", 00:15:43.907 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:43.907 "strip_size_kb": 0, 00:15:43.907 "state": "online", 00:15:43.907 "raid_level": "raid1", 00:15:43.907 "superblock": true, 00:15:43.907 
"num_base_bdevs": 2, 00:15:43.907 "num_base_bdevs_discovered": 2, 00:15:43.907 "num_base_bdevs_operational": 2, 00:15:43.907 "base_bdevs_list": [ 00:15:43.907 { 00:15:43.907 "name": "spare", 00:15:43.907 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:43.907 "is_configured": true, 00:15:43.907 "data_offset": 256, 00:15:43.907 "data_size": 7936 00:15:43.907 }, 00:15:43.907 { 00:15:43.907 "name": "BaseBdev2", 00:15:43.907 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:43.907 "is_configured": true, 00:15:43.907 "data_offset": 256, 00:15:43.907 "data_size": 7936 00:15:43.907 } 00:15:43.907 ] 00:15:43.907 }' 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:43.907 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.166 
01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.166 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.166 "name": "raid_bdev1", 00:15:44.166 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:44.166 "strip_size_kb": 0, 00:15:44.166 "state": "online", 00:15:44.166 "raid_level": "raid1", 00:15:44.166 "superblock": true, 00:15:44.166 "num_base_bdevs": 2, 00:15:44.166 "num_base_bdevs_discovered": 2, 00:15:44.166 "num_base_bdevs_operational": 2, 00:15:44.166 "base_bdevs_list": [ 00:15:44.166 { 00:15:44.166 "name": "spare", 00:15:44.167 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:44.167 "is_configured": true, 00:15:44.167 "data_offset": 256, 00:15:44.167 "data_size": 7936 00:15:44.167 }, 00:15:44.167 { 00:15:44.167 "name": "BaseBdev2", 00:15:44.167 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:44.167 "is_configured": true, 00:15:44.167 "data_offset": 256, 00:15:44.167 "data_size": 7936 00:15:44.167 } 00:15:44.167 ] 00:15:44.167 }' 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.167 "name": "raid_bdev1", 00:15:44.167 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:44.167 
"strip_size_kb": 0, 00:15:44.167 "state": "online", 00:15:44.167 "raid_level": "raid1", 00:15:44.167 "superblock": true, 00:15:44.167 "num_base_bdevs": 2, 00:15:44.167 "num_base_bdevs_discovered": 2, 00:15:44.167 "num_base_bdevs_operational": 2, 00:15:44.167 "base_bdevs_list": [ 00:15:44.167 { 00:15:44.167 "name": "spare", 00:15:44.167 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:44.167 "is_configured": true, 00:15:44.167 "data_offset": 256, 00:15:44.167 "data_size": 7936 00:15:44.167 }, 00:15:44.167 { 00:15:44.167 "name": "BaseBdev2", 00:15:44.167 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:44.167 "is_configured": true, 00:15:44.167 "data_offset": 256, 00:15:44.167 "data_size": 7936 00:15:44.167 } 00:15:44.167 ] 00:15:44.167 }' 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.167 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 [2024-12-07 01:59:49.961512] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.735 [2024-12-07 01:59:49.961609] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.735 [2024-12-07 01:59:49.961727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.735 [2024-12-07 01:59:49.961819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.735 [2024-12-07 01:59:49.961838] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, 
state offline 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:44.735 01:59:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.735 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:44.994 /dev/nbd0 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.995 1+0 records in 00:15:44.995 1+0 records out 00:15:44.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478075 s, 8.6 MB/s 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.995 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:45.255 /dev/nbd1 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:45.255 1+0 records in 00:15:45.255 1+0 records out 00:15:45.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261111 s, 15.7 MB/s 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.255 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.515 01:59:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 [2024-12-07 01:59:51.043910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:45.774 [2024-12-07 01:59:51.043970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.774 [2024-12-07 01:59:51.043992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.774 [2024-12-07 01:59:51.044004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:45.774 [2024-12-07 01:59:51.046014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.774 [2024-12-07 01:59:51.046052] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:45.774 [2024-12-07 01:59:51.046116] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:45.774 [2024-12-07 01:59:51.046154] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:45.774 [2024-12-07 01:59:51.046260] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:45.774 spare 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.774 [2024-12-07 01:59:51.146163] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:45.774 [2024-12-07 01:59:51.146207] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:45.774 [2024-12-07 01:59:51.146369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:45.774 [2024-12-07 01:59:51.146507] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:45.774 [2024-12-07 01:59:51.146517] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:45.774 [2024-12-07 01:59:51.146631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:45.774 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.775 "name": "raid_bdev1", 00:15:45.775 "uuid": 
"9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:45.775 "strip_size_kb": 0, 00:15:45.775 "state": "online", 00:15:45.775 "raid_level": "raid1", 00:15:45.775 "superblock": true, 00:15:45.775 "num_base_bdevs": 2, 00:15:45.775 "num_base_bdevs_discovered": 2, 00:15:45.775 "num_base_bdevs_operational": 2, 00:15:45.775 "base_bdevs_list": [ 00:15:45.775 { 00:15:45.775 "name": "spare", 00:15:45.775 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:45.775 "is_configured": true, 00:15:45.775 "data_offset": 256, 00:15:45.775 "data_size": 7936 00:15:45.775 }, 00:15:45.775 { 00:15:45.775 "name": "BaseBdev2", 00:15:45.775 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:45.775 "is_configured": true, 00:15:45.775 "data_offset": 256, 00:15:45.775 "data_size": 7936 00:15:45.775 } 00:15:45.775 ] 00:15:45.775 }' 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.775 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.343 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.343 "name": "raid_bdev1", 00:15:46.343 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:46.343 "strip_size_kb": 0, 00:15:46.343 "state": "online", 00:15:46.343 "raid_level": "raid1", 00:15:46.344 "superblock": true, 00:15:46.344 "num_base_bdevs": 2, 00:15:46.344 "num_base_bdevs_discovered": 2, 00:15:46.344 "num_base_bdevs_operational": 2, 00:15:46.344 "base_bdevs_list": [ 00:15:46.344 { 00:15:46.344 "name": "spare", 00:15:46.344 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:46.344 "is_configured": true, 00:15:46.344 "data_offset": 256, 00:15:46.344 "data_size": 7936 00:15:46.344 }, 00:15:46.344 { 00:15:46.344 "name": "BaseBdev2", 00:15:46.344 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:46.344 "is_configured": true, 00:15:46.344 "data_offset": 256, 00:15:46.344 "data_size": 7936 00:15:46.344 } 00:15:46.344 ] 00:15:46.344 }' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.344 [2024-12-07 01:59:51.714865] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.344 01:59:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.344 "name": "raid_bdev1", 00:15:46.344 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:46.344 "strip_size_kb": 0, 00:15:46.344 "state": "online", 00:15:46.344 "raid_level": "raid1", 00:15:46.344 "superblock": true, 00:15:46.344 "num_base_bdevs": 2, 00:15:46.344 "num_base_bdevs_discovered": 1, 00:15:46.344 "num_base_bdevs_operational": 1, 00:15:46.344 "base_bdevs_list": [ 00:15:46.344 { 00:15:46.344 "name": null, 00:15:46.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.344 "is_configured": false, 00:15:46.344 "data_offset": 0, 00:15:46.344 "data_size": 7936 00:15:46.344 }, 00:15:46.344 { 00:15:46.344 "name": "BaseBdev2", 00:15:46.344 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:46.344 "is_configured": true, 00:15:46.344 "data_offset": 256, 00:15:46.344 "data_size": 7936 00:15:46.344 } 00:15:46.344 ] 00:15:46.344 }' 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.344 01:59:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.913 01:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.913 01:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.913 01:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:46.913 [2024-12-07 01:59:52.186066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.913 [2024-12-07 01:59:52.186351] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:46.913 [2024-12-07 01:59:52.186424] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:46.913 [2024-12-07 01:59:52.186537] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.913 [2024-12-07 01:59:52.188240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:46.913 [2024-12-07 01:59:52.190255] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:46.913 01:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.913 01:59:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.852 01:59:53 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.852 "name": "raid_bdev1", 00:15:47.852 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:47.852 "strip_size_kb": 0, 00:15:47.852 "state": "online", 00:15:47.852 "raid_level": "raid1", 00:15:47.852 "superblock": true, 00:15:47.852 "num_base_bdevs": 2, 00:15:47.852 "num_base_bdevs_discovered": 2, 00:15:47.852 "num_base_bdevs_operational": 2, 00:15:47.852 "process": { 00:15:47.852 "type": "rebuild", 00:15:47.852 "target": "spare", 00:15:47.852 "progress": { 00:15:47.852 "blocks": 2560, 00:15:47.852 "percent": 32 00:15:47.852 } 00:15:47.852 }, 00:15:47.852 "base_bdevs_list": [ 00:15:47.852 { 00:15:47.852 "name": "spare", 00:15:47.852 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:47.852 "is_configured": true, 00:15:47.852 "data_offset": 256, 00:15:47.852 "data_size": 7936 00:15:47.852 }, 00:15:47.852 { 00:15:47.852 "name": "BaseBdev2", 00:15:47.852 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:47.852 "is_configured": true, 00:15:47.852 "data_offset": 256, 00:15:47.852 "data_size": 7936 00:15:47.852 } 00:15:47.852 ] 00:15:47.852 
}' 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.852 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.113 [2024-12-07 01:59:53.353311] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.113 [2024-12-07 01:59:53.395293] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:48.113 [2024-12-07 01:59:53.395415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.113 [2024-12-07 01:59:53.395437] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:48.113 [2024-12-07 01:59:53.395445] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.113 "name": "raid_bdev1", 00:15:48.113 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:48.113 "strip_size_kb": 0, 00:15:48.113 "state": "online", 00:15:48.113 "raid_level": "raid1", 00:15:48.113 "superblock": true, 00:15:48.113 "num_base_bdevs": 2, 00:15:48.113 "num_base_bdevs_discovered": 1, 00:15:48.113 "num_base_bdevs_operational": 1, 00:15:48.113 "base_bdevs_list": [ 00:15:48.113 { 00:15:48.113 "name": 
null, 00:15:48.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.113 "is_configured": false, 00:15:48.113 "data_offset": 0, 00:15:48.113 "data_size": 7936 00:15:48.113 }, 00:15:48.113 { 00:15:48.113 "name": "BaseBdev2", 00:15:48.113 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:48.113 "is_configured": true, 00:15:48.113 "data_offset": 256, 00:15:48.113 "data_size": 7936 00:15:48.113 } 00:15:48.113 ] 00:15:48.113 }' 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.113 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.373 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.373 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.373 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:48.373 [2024-12-07 01:59:53.797907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.373 [2024-12-07 01:59:53.798055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.373 [2024-12-07 01:59:53.798100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:48.373 [2024-12-07 01:59:53.798130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.373 [2024-12-07 01:59:53.798378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.373 [2024-12-07 01:59:53.798436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.373 [2024-12-07 01:59:53.798530] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:48.373 [2024-12-07 01:59:53.798566] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:48.373 [2024-12-07 01:59:53.798611] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:48.373 [2024-12-07 01:59:53.798700] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.373 [2024-12-07 01:59:53.800354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:48.373 [2024-12-07 01:59:53.802234] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.373 spare 00:15:48.373 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.373 01:59:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.755 01:59:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.755 "name": "raid_bdev1", 00:15:49.755 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:49.755 "strip_size_kb": 0, 00:15:49.755 "state": "online", 00:15:49.755 "raid_level": "raid1", 00:15:49.755 "superblock": true, 00:15:49.755 "num_base_bdevs": 2, 00:15:49.755 "num_base_bdevs_discovered": 2, 00:15:49.755 "num_base_bdevs_operational": 2, 00:15:49.755 "process": { 00:15:49.755 "type": "rebuild", 00:15:49.755 "target": "spare", 00:15:49.755 "progress": { 00:15:49.755 "blocks": 2560, 00:15:49.755 "percent": 32 00:15:49.755 } 00:15:49.755 }, 00:15:49.755 "base_bdevs_list": [ 00:15:49.755 { 00:15:49.755 "name": "spare", 00:15:49.755 "uuid": "00780b6a-fef9-5191-accf-b83d7eb0f297", 00:15:49.755 "is_configured": true, 00:15:49.755 "data_offset": 256, 00:15:49.755 "data_size": 7936 00:15:49.755 }, 00:15:49.755 { 00:15:49.755 "name": "BaseBdev2", 00:15:49.755 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:49.755 "is_configured": true, 00:15:49.755 "data_offset": 256, 00:15:49.755 "data_size": 7936 00:15:49.755 } 00:15:49.755 ] 00:15:49.755 }' 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.755 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.756 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.756 01:59:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.756 [2024-12-07 01:59:54.957245] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.756 [2024-12-07 01:59:55.007305] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:49.756 [2024-12-07 01:59:55.007495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.756 [2024-12-07 01:59:55.007533] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:49.756 [2024-12-07 01:59:55.007556] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.756 "name": "raid_bdev1", 00:15:49.756 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:49.756 "strip_size_kb": 0, 00:15:49.756 "state": "online", 00:15:49.756 "raid_level": "raid1", 00:15:49.756 "superblock": true, 00:15:49.756 "num_base_bdevs": 2, 00:15:49.756 "num_base_bdevs_discovered": 1, 00:15:49.756 "num_base_bdevs_operational": 1, 00:15:49.756 "base_bdevs_list": [ 00:15:49.756 { 00:15:49.756 "name": null, 00:15:49.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.756 "is_configured": false, 00:15:49.756 "data_offset": 0, 00:15:49.756 "data_size": 7936 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "name": "BaseBdev2", 00:15:49.756 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:49.756 "is_configured": true, 00:15:49.756 "data_offset": 256, 00:15:49.756 "data_size": 7936 00:15:49.756 } 00:15:49.756 ] 00:15:49.756 }' 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.756 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.327 "name": "raid_bdev1", 00:15:50.327 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:50.327 "strip_size_kb": 0, 00:15:50.327 "state": "online", 00:15:50.327 "raid_level": "raid1", 00:15:50.327 "superblock": true, 00:15:50.327 "num_base_bdevs": 2, 00:15:50.327 "num_base_bdevs_discovered": 1, 00:15:50.327 "num_base_bdevs_operational": 1, 00:15:50.327 "base_bdevs_list": [ 00:15:50.327 { 00:15:50.327 "name": null, 00:15:50.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.327 "is_configured": false, 00:15:50.327 "data_offset": 0, 00:15:50.327 "data_size": 7936 00:15:50.327 }, 00:15:50.327 { 00:15:50.327 "name": "BaseBdev2", 00:15:50.327 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 
00:15:50.327 "is_configured": true, 00:15:50.327 "data_offset": 256, 00:15:50.327 "data_size": 7936 00:15:50.327 } 00:15:50.327 ] 00:15:50.327 }' 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:50.327 [2024-12-07 01:59:55.661545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.327 [2024-12-07 01:59:55.661610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.327 [2024-12-07 01:59:55.661630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:50.327 [2024-12-07 01:59:55.661641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:50.327 [2024-12-07 01:59:55.661869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.327 [2024-12-07 01:59:55.661887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.327 [2024-12-07 01:59:55.661940] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:50.327 [2024-12-07 01:59:55.661962] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:50.327 [2024-12-07 01:59:55.661970] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:50.327 [2024-12-07 01:59:55.661983] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:50.327 BaseBdev1 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.327 01:59:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.271 01:59:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.271 "name": "raid_bdev1", 00:15:51.271 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:51.271 "strip_size_kb": 0, 00:15:51.271 "state": "online", 00:15:51.271 "raid_level": "raid1", 00:15:51.271 "superblock": true, 00:15:51.271 "num_base_bdevs": 2, 00:15:51.271 "num_base_bdevs_discovered": 1, 00:15:51.271 "num_base_bdevs_operational": 1, 00:15:51.271 "base_bdevs_list": [ 00:15:51.271 { 00:15:51.271 "name": null, 00:15:51.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.271 "is_configured": false, 00:15:51.271 "data_offset": 0, 00:15:51.271 "data_size": 7936 00:15:51.271 }, 00:15:51.271 { 00:15:51.271 "name": "BaseBdev2", 00:15:51.271 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:51.271 "is_configured": true, 00:15:51.271 "data_offset": 256, 00:15:51.271 "data_size": 7936 00:15:51.271 } 00:15:51.271 ] 00:15:51.271 }' 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.271 01:59:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.840 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.840 "name": "raid_bdev1", 00:15:51.840 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:51.840 "strip_size_kb": 0, 00:15:51.840 "state": "online", 00:15:51.840 "raid_level": "raid1", 00:15:51.840 "superblock": true, 00:15:51.840 "num_base_bdevs": 2, 00:15:51.840 "num_base_bdevs_discovered": 1, 00:15:51.840 "num_base_bdevs_operational": 1, 00:15:51.840 "base_bdevs_list": [ 00:15:51.840 { 00:15:51.840 "name": null, 00:15:51.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.840 
"is_configured": false, 00:15:51.840 "data_offset": 0, 00:15:51.840 "data_size": 7936 00:15:51.840 }, 00:15:51.840 { 00:15:51.840 "name": "BaseBdev2", 00:15:51.840 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:51.841 "is_configured": true, 00:15:51.841 "data_offset": 256, 00:15:51.841 "data_size": 7936 00:15:51.841 } 00:15:51.841 ] 00:15:51.841 }' 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:51.841 01:59:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:51.841 [2024-12-07 01:59:57.258845] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.841 [2024-12-07 01:59:57.259056] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.841 [2024-12-07 01:59:57.259110] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:51.841 request: 00:15:51.841 { 00:15:51.841 "base_bdev": "BaseBdev1", 00:15:51.841 "raid_bdev": "raid_bdev1", 00:15:51.841 "method": "bdev_raid_add_base_bdev", 00:15:51.841 "req_id": 1 00:15:51.841 } 00:15:51.841 Got JSON-RPC error response 00:15:51.841 response: 00:15:51.841 { 00:15:51.841 "code": -22, 00:15:51.841 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:51.841 } 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.841 01:59:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.220 "name": "raid_bdev1", 00:15:53.220 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:53.220 "strip_size_kb": 0, 00:15:53.220 "state": "online", 00:15:53.220 "raid_level": "raid1", 00:15:53.220 "superblock": true, 00:15:53.220 "num_base_bdevs": 2, 00:15:53.220 
"num_base_bdevs_discovered": 1, 00:15:53.220 "num_base_bdevs_operational": 1, 00:15:53.220 "base_bdevs_list": [ 00:15:53.220 { 00:15:53.220 "name": null, 00:15:53.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.220 "is_configured": false, 00:15:53.220 "data_offset": 0, 00:15:53.220 "data_size": 7936 00:15:53.220 }, 00:15:53.220 { 00:15:53.220 "name": "BaseBdev2", 00:15:53.220 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:53.220 "is_configured": true, 00:15:53.220 "data_offset": 256, 00:15:53.220 "data_size": 7936 00:15:53.220 } 00:15:53.220 ] 00:15:53.220 }' 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.220 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.480 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:53.480 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.481 "name": "raid_bdev1", 00:15:53.481 "uuid": "9ee4b5a4-4c3b-4046-821c-525bfb054cb3", 00:15:53.481 "strip_size_kb": 0, 00:15:53.481 "state": "online", 00:15:53.481 "raid_level": "raid1", 00:15:53.481 "superblock": true, 00:15:53.481 "num_base_bdevs": 2, 00:15:53.481 "num_base_bdevs_discovered": 1, 00:15:53.481 "num_base_bdevs_operational": 1, 00:15:53.481 "base_bdevs_list": [ 00:15:53.481 { 00:15:53.481 "name": null, 00:15:53.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.481 "is_configured": false, 00:15:53.481 "data_offset": 0, 00:15:53.481 "data_size": 7936 00:15:53.481 }, 00:15:53.481 { 00:15:53.481 "name": "BaseBdev2", 00:15:53.481 "uuid": "39a1a711-471c-54e3-9343-eec327bde039", 00:15:53.481 "is_configured": true, 00:15:53.481 "data_offset": 256, 00:15:53.481 "data_size": 7936 00:15:53.481 } 00:15:53.481 ] 00:15:53.481 }' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97766 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 97766 ']' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 97766 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:15:53.481 01:59:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97766 00:15:53.481 killing process with pid 97766 00:15:53.481 Received shutdown signal, test time was about 60.000000 seconds 00:15:53.481 00:15:53.481 Latency(us) 00:15:53.481 [2024-12-07T01:59:58.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.481 [2024-12-07T01:59:58.943Z] =================================================================================================================== 00:15:53.481 [2024-12-07T01:59:58.943Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97766' 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 97766 00:15:53.481 [2024-12-07 01:59:58.883977] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.481 [2024-12-07 01:59:58.884118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.481 01:59:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 97766 00:15:53.481 [2024-12-07 01:59:58.884170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.481 [2024-12-07 01:59:58.884179] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:53.481 [2024-12-07 01:59:58.917713] bdev_raid.c:1409:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:15:53.741 01:59:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:15:53.741 00:15:53.741 real 0m18.255s 00:15:53.741 user 0m24.350s 00:15:53.741 sys 0m2.550s 00:15:53.741 01:59:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:53.741 01:59:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 ************************************ 00:15:53.741 END TEST raid_rebuild_test_sb_md_separate 00:15:53.741 ************************************ 00:15:53.741 01:59:59 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:53.741 01:59:59 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:53.741 01:59:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:53.741 01:59:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:53.741 01:59:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:54.001 ************************************ 00:15:54.001 START TEST raid_state_function_test_sb_md_interleaved 00:15:54.001 ************************************ 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:54.001 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:54.002 01:59:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:54.002 Process raid pid: 98440 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98440 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98440' 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98440 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98440 ']' 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.002 01:59:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.002 [2024-12-07 01:59:59.302671] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:54.002 [2024-12-07 01:59:59.302878] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.002 [2024-12-07 01:59:59.448880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.261 [2024-12-07 01:59:59.500212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.261 [2024-12-07 01:59:59.541897] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.261 [2024-12-07 01:59:59.541932] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.830 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:54.830 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.831 [2024-12-07 02:00:00.135496] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.831 [2024-12-07 02:00:00.135616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.831 [2024-12-07 02:00:00.135632] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:54.831 [2024-12-07 02:00:00.135644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:54.831 02:00:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.831 02:00:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.831 "name": "Existed_Raid", 00:15:54.831 "uuid": "b46db9fe-1a69-4e6b-9466-f20387cab2a7", 00:15:54.831 "strip_size_kb": 0, 00:15:54.831 "state": "configuring", 00:15:54.831 "raid_level": "raid1", 00:15:54.831 "superblock": true, 00:15:54.831 "num_base_bdevs": 2, 00:15:54.831 "num_base_bdevs_discovered": 0, 00:15:54.831 "num_base_bdevs_operational": 2, 00:15:54.831 "base_bdevs_list": [ 00:15:54.831 { 00:15:54.831 "name": "BaseBdev1", 00:15:54.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.831 "is_configured": false, 00:15:54.831 "data_offset": 0, 00:15:54.831 "data_size": 0 00:15:54.831 }, 00:15:54.831 { 00:15:54.831 "name": "BaseBdev2", 00:15:54.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.831 "is_configured": false, 00:15:54.831 "data_offset": 0, 00:15:54.831 "data_size": 0 00:15:54.831 } 00:15:54.831 ] 00:15:54.831 }' 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.831 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 [2024-12-07 02:00:00.482804] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.091 [2024-12-07 02:00:00.482910] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 [2024-12-07 02:00:00.494778] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:55.091 [2024-12-07 02:00:00.494854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:55.091 [2024-12-07 02:00:00.494891] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.091 [2024-12-07 02:00:00.494914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 [2024-12-07 02:00:00.515452] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.091 BaseBdev1 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.091 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.091 [ 00:15:55.091 { 00:15:55.091 "name": "BaseBdev1", 00:15:55.091 "aliases": [ 00:15:55.091 "a16a20cb-868a-44f8-8027-1387fbc68428" 00:15:55.091 ], 00:15:55.091 "product_name": "Malloc disk", 00:15:55.091 "block_size": 4128, 00:15:55.091 "num_blocks": 8192, 00:15:55.091 "uuid": "a16a20cb-868a-44f8-8027-1387fbc68428", 00:15:55.091 "md_size": 32, 00:15:55.091 
"md_interleave": true, 00:15:55.091 "dif_type": 0, 00:15:55.091 "assigned_rate_limits": { 00:15:55.091 "rw_ios_per_sec": 0, 00:15:55.091 "rw_mbytes_per_sec": 0, 00:15:55.091 "r_mbytes_per_sec": 0, 00:15:55.091 "w_mbytes_per_sec": 0 00:15:55.091 }, 00:15:55.091 "claimed": true, 00:15:55.091 "claim_type": "exclusive_write", 00:15:55.091 "zoned": false, 00:15:55.091 "supported_io_types": { 00:15:55.091 "read": true, 00:15:55.091 "write": true, 00:15:55.091 "unmap": true, 00:15:55.091 "flush": true, 00:15:55.091 "reset": true, 00:15:55.091 "nvme_admin": false, 00:15:55.091 "nvme_io": false, 00:15:55.091 "nvme_io_md": false, 00:15:55.091 "write_zeroes": true, 00:15:55.091 "zcopy": true, 00:15:55.091 "get_zone_info": false, 00:15:55.091 "zone_management": false, 00:15:55.091 "zone_append": false, 00:15:55.091 "compare": false, 00:15:55.091 "compare_and_write": false, 00:15:55.091 "abort": true, 00:15:55.091 "seek_hole": false, 00:15:55.091 "seek_data": false, 00:15:55.091 "copy": true, 00:15:55.091 "nvme_iov_md": false 00:15:55.351 }, 00:15:55.351 "memory_domains": [ 00:15:55.351 { 00:15:55.351 "dma_device_id": "system", 00:15:55.351 "dma_device_type": 1 00:15:55.351 }, 00:15:55.351 { 00:15:55.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.351 "dma_device_type": 2 00:15:55.351 } 00:15:55.351 ], 00:15:55.351 "driver_specific": {} 00:15:55.351 } 00:15:55.351 ] 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.351 02:00:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.351 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.352 "name": "Existed_Raid", 00:15:55.352 "uuid": "729adcee-f509-4f41-ae97-2177837f8fe2", 00:15:55.352 "strip_size_kb": 0, 00:15:55.352 "state": "configuring", 00:15:55.352 "raid_level": "raid1", 
00:15:55.352 "superblock": true, 00:15:55.352 "num_base_bdevs": 2, 00:15:55.352 "num_base_bdevs_discovered": 1, 00:15:55.352 "num_base_bdevs_operational": 2, 00:15:55.352 "base_bdevs_list": [ 00:15:55.352 { 00:15:55.352 "name": "BaseBdev1", 00:15:55.352 "uuid": "a16a20cb-868a-44f8-8027-1387fbc68428", 00:15:55.352 "is_configured": true, 00:15:55.352 "data_offset": 256, 00:15:55.352 "data_size": 7936 00:15:55.352 }, 00:15:55.352 { 00:15:55.352 "name": "BaseBdev2", 00:15:55.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.352 "is_configured": false, 00:15:55.352 "data_offset": 0, 00:15:55.352 "data_size": 0 00:15:55.352 } 00:15:55.352 ] 00:15:55.352 }' 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.352 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.612 [2024-12-07 02:00:00.982763] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:55.612 [2024-12-07 02:00:00.982832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.612 [2024-12-07 02:00:00.994768] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.612 [2024-12-07 02:00:00.996615] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:55.612 [2024-12-07 02:00:00.996673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:55.612 02:00:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.612 
02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.612 "name": "Existed_Raid", 00:15:55.612 "uuid": "1c825d17-8534-423f-97d4-c1c4eb060ee7", 00:15:55.612 "strip_size_kb": 0, 00:15:55.612 "state": "configuring", 00:15:55.612 "raid_level": "raid1", 00:15:55.612 "superblock": true, 00:15:55.612 "num_base_bdevs": 2, 00:15:55.612 "num_base_bdevs_discovered": 1, 00:15:55.612 "num_base_bdevs_operational": 2, 00:15:55.612 "base_bdevs_list": [ 00:15:55.612 { 00:15:55.612 "name": "BaseBdev1", 00:15:55.612 "uuid": "a16a20cb-868a-44f8-8027-1387fbc68428", 00:15:55.612 "is_configured": true, 00:15:55.612 "data_offset": 256, 00:15:55.612 "data_size": 7936 00:15:55.612 }, 00:15:55.612 { 00:15:55.612 "name": "BaseBdev2", 00:15:55.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.612 "is_configured": false, 00:15:55.612 "data_offset": 0, 00:15:55.612 "data_size": 0 00:15:55.612 } 00:15:55.612 ] 00:15:55.612 }' 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:15:55.612 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.181 [2024-12-07 02:00:01.474643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:56.181 [2024-12-07 02:00:01.474957] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:56.181 [2024-12-07 02:00:01.475003] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:56.181 [2024-12-07 02:00:01.475157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:56.181 [2024-12-07 02:00:01.475289] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:56.181 [2024-12-07 02:00:01.475341] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:56.181 [2024-12-07 02:00:01.475458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.181 BaseBdev2 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:56.181 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.182 [ 00:15:56.182 { 00:15:56.182 "name": "BaseBdev2", 00:15:56.182 "aliases": [ 00:15:56.182 "be985794-d1b8-433c-85b3-ac7194024f05" 00:15:56.182 ], 00:15:56.182 "product_name": "Malloc disk", 00:15:56.182 "block_size": 4128, 00:15:56.182 "num_blocks": 8192, 00:15:56.182 "uuid": "be985794-d1b8-433c-85b3-ac7194024f05", 00:15:56.182 "md_size": 32, 00:15:56.182 "md_interleave": true, 00:15:56.182 "dif_type": 0, 00:15:56.182 "assigned_rate_limits": { 00:15:56.182 "rw_ios_per_sec": 0, 00:15:56.182 "rw_mbytes_per_sec": 0, 00:15:56.182 "r_mbytes_per_sec": 0, 00:15:56.182 "w_mbytes_per_sec": 0 00:15:56.182 }, 00:15:56.182 "claimed": true, 00:15:56.182 "claim_type": "exclusive_write", 
00:15:56.182 "zoned": false, 00:15:56.182 "supported_io_types": { 00:15:56.182 "read": true, 00:15:56.182 "write": true, 00:15:56.182 "unmap": true, 00:15:56.182 "flush": true, 00:15:56.182 "reset": true, 00:15:56.182 "nvme_admin": false, 00:15:56.182 "nvme_io": false, 00:15:56.182 "nvme_io_md": false, 00:15:56.182 "write_zeroes": true, 00:15:56.182 "zcopy": true, 00:15:56.182 "get_zone_info": false, 00:15:56.182 "zone_management": false, 00:15:56.182 "zone_append": false, 00:15:56.182 "compare": false, 00:15:56.182 "compare_and_write": false, 00:15:56.182 "abort": true, 00:15:56.182 "seek_hole": false, 00:15:56.182 "seek_data": false, 00:15:56.182 "copy": true, 00:15:56.182 "nvme_iov_md": false 00:15:56.182 }, 00:15:56.182 "memory_domains": [ 00:15:56.182 { 00:15:56.182 "dma_device_id": "system", 00:15:56.182 "dma_device_type": 1 00:15:56.182 }, 00:15:56.182 { 00:15:56.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.182 "dma_device_type": 2 00:15:56.182 } 00:15:56.182 ], 00:15:56.182 "driver_specific": {} 00:15:56.182 } 00:15:56.182 ] 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.182 
02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.182 "name": "Existed_Raid", 00:15:56.182 "uuid": "1c825d17-8534-423f-97d4-c1c4eb060ee7", 00:15:56.182 "strip_size_kb": 0, 00:15:56.182 "state": "online", 00:15:56.182 "raid_level": "raid1", 00:15:56.182 "superblock": true, 00:15:56.182 "num_base_bdevs": 2, 00:15:56.182 "num_base_bdevs_discovered": 2, 00:15:56.182 
"num_base_bdevs_operational": 2, 00:15:56.182 "base_bdevs_list": [ 00:15:56.182 { 00:15:56.182 "name": "BaseBdev1", 00:15:56.182 "uuid": "a16a20cb-868a-44f8-8027-1387fbc68428", 00:15:56.182 "is_configured": true, 00:15:56.182 "data_offset": 256, 00:15:56.182 "data_size": 7936 00:15:56.182 }, 00:15:56.182 { 00:15:56.182 "name": "BaseBdev2", 00:15:56.182 "uuid": "be985794-d1b8-433c-85b3-ac7194024f05", 00:15:56.182 "is_configured": true, 00:15:56.182 "data_offset": 256, 00:15:56.182 "data_size": 7936 00:15:56.182 } 00:15:56.182 ] 00:15:56.182 }' 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.182 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:56.441 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.441 02:00:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.441 [2024-12-07 02:00:01.894279] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.702 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.702 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:56.702 "name": "Existed_Raid", 00:15:56.702 "aliases": [ 00:15:56.702 "1c825d17-8534-423f-97d4-c1c4eb060ee7" 00:15:56.702 ], 00:15:56.702 "product_name": "Raid Volume", 00:15:56.702 "block_size": 4128, 00:15:56.702 "num_blocks": 7936, 00:15:56.702 "uuid": "1c825d17-8534-423f-97d4-c1c4eb060ee7", 00:15:56.702 "md_size": 32, 00:15:56.702 "md_interleave": true, 00:15:56.702 "dif_type": 0, 00:15:56.702 "assigned_rate_limits": { 00:15:56.702 "rw_ios_per_sec": 0, 00:15:56.702 "rw_mbytes_per_sec": 0, 00:15:56.702 "r_mbytes_per_sec": 0, 00:15:56.702 "w_mbytes_per_sec": 0 00:15:56.702 }, 00:15:56.702 "claimed": false, 00:15:56.702 "zoned": false, 00:15:56.702 "supported_io_types": { 00:15:56.702 "read": true, 00:15:56.702 "write": true, 00:15:56.702 "unmap": false, 00:15:56.702 "flush": false, 00:15:56.702 "reset": true, 00:15:56.702 "nvme_admin": false, 00:15:56.702 "nvme_io": false, 00:15:56.702 "nvme_io_md": false, 00:15:56.702 "write_zeroes": true, 00:15:56.702 "zcopy": false, 00:15:56.702 "get_zone_info": false, 00:15:56.702 "zone_management": false, 00:15:56.702 "zone_append": false, 00:15:56.702 "compare": false, 00:15:56.702 "compare_and_write": false, 00:15:56.702 "abort": false, 00:15:56.702 "seek_hole": false, 00:15:56.702 "seek_data": false, 00:15:56.702 "copy": false, 00:15:56.702 "nvme_iov_md": false 00:15:56.702 }, 00:15:56.702 "memory_domains": [ 00:15:56.702 { 00:15:56.702 "dma_device_id": "system", 00:15:56.702 "dma_device_type": 1 00:15:56.702 }, 00:15:56.702 { 00:15:56.702 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:56.702 "dma_device_type": 2 00:15:56.702 }, 00:15:56.702 { 00:15:56.702 "dma_device_id": "system", 00:15:56.702 "dma_device_type": 1 00:15:56.702 }, 00:15:56.702 { 00:15:56.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.702 "dma_device_type": 2 00:15:56.702 } 00:15:56.702 ], 00:15:56.702 "driver_specific": { 00:15:56.702 "raid": { 00:15:56.702 "uuid": "1c825d17-8534-423f-97d4-c1c4eb060ee7", 00:15:56.702 "strip_size_kb": 0, 00:15:56.702 "state": "online", 00:15:56.702 "raid_level": "raid1", 00:15:56.702 "superblock": true, 00:15:56.702 "num_base_bdevs": 2, 00:15:56.702 "num_base_bdevs_discovered": 2, 00:15:56.702 "num_base_bdevs_operational": 2, 00:15:56.702 "base_bdevs_list": [ 00:15:56.702 { 00:15:56.702 "name": "BaseBdev1", 00:15:56.702 "uuid": "a16a20cb-868a-44f8-8027-1387fbc68428", 00:15:56.702 "is_configured": true, 00:15:56.702 "data_offset": 256, 00:15:56.702 "data_size": 7936 00:15:56.702 }, 00:15:56.702 { 00:15:56.702 "name": "BaseBdev2", 00:15:56.702 "uuid": "be985794-d1b8-433c-85b3-ac7194024f05", 00:15:56.702 "is_configured": true, 00:15:56.702 "data_offset": 256, 00:15:56.702 "data_size": 7936 00:15:56.702 } 00:15:56.702 ] 00:15:56.702 } 00:15:56.702 } 00:15:56.702 }' 00:15:56.702 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.702 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:56.702 BaseBdev2' 00:15:56.702 02:00:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:56.702 
02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.702 [2024-12-07 02:00:02.133645] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:56.702 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.703 02:00:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.703 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.962 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.962 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.962 "name": "Existed_Raid", 00:15:56.962 "uuid": "1c825d17-8534-423f-97d4-c1c4eb060ee7", 00:15:56.962 "strip_size_kb": 0, 00:15:56.962 "state": "online", 00:15:56.962 "raid_level": "raid1", 00:15:56.962 "superblock": true, 00:15:56.962 "num_base_bdevs": 2, 00:15:56.962 "num_base_bdevs_discovered": 1, 00:15:56.962 "num_base_bdevs_operational": 1, 00:15:56.962 "base_bdevs_list": [ 00:15:56.962 { 00:15:56.962 "name": null, 00:15:56.962 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:56.962 "is_configured": false, 00:15:56.962 "data_offset": 0, 00:15:56.962 "data_size": 7936 00:15:56.962 }, 00:15:56.962 { 00:15:56.963 "name": "BaseBdev2", 00:15:56.963 "uuid": "be985794-d1b8-433c-85b3-ac7194024f05", 00:15:56.963 "is_configured": true, 00:15:56.963 "data_offset": 256, 00:15:56.963 "data_size": 7936 00:15:56.963 } 00:15:56.963 ] 00:15:56.963 }' 00:15:56.963 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.963 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:57.222 02:00:02 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.222 [2024-12-07 02:00:02.656378] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:57.222 [2024-12-07 02:00:02.656539] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.222 [2024-12-07 02:00:02.668441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.222 [2024-12-07 02:00:02.668493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.222 [2024-12-07 02:00:02.668505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.222 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98440 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98440 ']' 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98440 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98440 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:57.481 killing process with pid 98440 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98440' 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98440 00:15:57.481 [2024-12-07 02:00:02.754626] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.481 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98440 00:15:57.481 [2024-12-07 02:00:02.755608] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.740 
02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:15:57.740 00:15:57.740 real 0m3.781s 00:15:57.740 user 0m5.884s 00:15:57.740 sys 0m0.812s 00:15:57.740 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.740 02:00:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.740 ************************************ 00:15:57.740 END TEST raid_state_function_test_sb_md_interleaved 00:15:57.740 ************************************ 00:15:57.740 02:00:03 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:57.740 02:00:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:15:57.740 02:00:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.740 02:00:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.740 ************************************ 00:15:57.740 START TEST raid_superblock_test_md_interleaved 00:15:57.740 ************************************ 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:15:57.740 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98676 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98676 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98676 ']' 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.741 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.741 [2024-12-07 02:00:03.150117] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:57.741 [2024-12-07 02:00:03.150226] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98676 ] 00:15:57.999 [2024-12-07 02:00:03.292584] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.999 [2024-12-07 02:00:03.341922] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.000 [2024-12-07 02:00:03.383210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.000 [2024-12-07 02:00:03.383329] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # 
local bdev_malloc=malloc1 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.568 02:00:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.568 malloc1 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.568 [2024-12-07 02:00:04.016801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:58.568 [2024-12-07 02:00:04.016899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.568 [2024-12-07 02:00:04.016946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:15:58.568 [2024-12-07 02:00:04.016977] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.568 [2024-12-07 02:00:04.018901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.568 [2024-12-07 02:00:04.018970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:58.568 pt1 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.568 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 malloc2 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 [2024-12-07 02:00:04.057149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:58.828 [2024-12-07 02:00:04.057203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.828 [2024-12-07 02:00:04.057219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:58.828 [2024-12-07 02:00:04.057230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.828 [2024-12-07 02:00:04.059104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.828 [2024-12-07 02:00:04.059199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:58.828 pt2 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 [2024-12-07 
02:00:04.069179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:58.828 [2024-12-07 02:00:04.070991] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:58.828 [2024-12-07 02:00:04.071171] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:58.828 [2024-12-07 02:00:04.071188] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:58.828 [2024-12-07 02:00:04.071273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:58.828 [2024-12-07 02:00:04.071329] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:58.828 [2024-12-07 02:00:04.071340] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:58.828 [2024-12-07 02:00:04.071412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.828 "name": "raid_bdev1", 00:15:58.828 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:15:58.828 "strip_size_kb": 0, 00:15:58.828 "state": "online", 00:15:58.828 "raid_level": "raid1", 00:15:58.828 "superblock": true, 00:15:58.828 "num_base_bdevs": 2, 00:15:58.828 "num_base_bdevs_discovered": 2, 00:15:58.828 "num_base_bdevs_operational": 2, 00:15:58.828 "base_bdevs_list": [ 00:15:58.828 { 00:15:58.828 "name": "pt1", 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.828 "is_configured": true, 00:15:58.828 "data_offset": 256, 00:15:58.828 "data_size": 7936 00:15:58.828 }, 00:15:58.828 { 00:15:58.828 "name": "pt2", 00:15:58.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.828 "is_configured": true, 00:15:58.828 "data_offset": 256, 00:15:58.828 "data_size": 7936 00:15:58.828 } 00:15:58.828 ] 00:15:58.828 }' 00:15:58.828 02:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.828 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.086 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.086 [2024-12-07 02:00:04.540646] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:59.345 "name": "raid_bdev1", 00:15:59.345 "aliases": [ 00:15:59.345 "bc3e4582-8c0c-4ac5-959d-d04b4c938754" 00:15:59.345 ], 00:15:59.345 "product_name": "Raid Volume", 00:15:59.345 "block_size": 4128, 00:15:59.345 
"num_blocks": 7936, 00:15:59.345 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:15:59.345 "md_size": 32, 00:15:59.345 "md_interleave": true, 00:15:59.345 "dif_type": 0, 00:15:59.345 "assigned_rate_limits": { 00:15:59.345 "rw_ios_per_sec": 0, 00:15:59.345 "rw_mbytes_per_sec": 0, 00:15:59.345 "r_mbytes_per_sec": 0, 00:15:59.345 "w_mbytes_per_sec": 0 00:15:59.345 }, 00:15:59.345 "claimed": false, 00:15:59.345 "zoned": false, 00:15:59.345 "supported_io_types": { 00:15:59.345 "read": true, 00:15:59.345 "write": true, 00:15:59.345 "unmap": false, 00:15:59.345 "flush": false, 00:15:59.345 "reset": true, 00:15:59.345 "nvme_admin": false, 00:15:59.345 "nvme_io": false, 00:15:59.345 "nvme_io_md": false, 00:15:59.345 "write_zeroes": true, 00:15:59.345 "zcopy": false, 00:15:59.345 "get_zone_info": false, 00:15:59.345 "zone_management": false, 00:15:59.345 "zone_append": false, 00:15:59.345 "compare": false, 00:15:59.345 "compare_and_write": false, 00:15:59.345 "abort": false, 00:15:59.345 "seek_hole": false, 00:15:59.345 "seek_data": false, 00:15:59.345 "copy": false, 00:15:59.345 "nvme_iov_md": false 00:15:59.345 }, 00:15:59.345 "memory_domains": [ 00:15:59.345 { 00:15:59.345 "dma_device_id": "system", 00:15:59.345 "dma_device_type": 1 00:15:59.345 }, 00:15:59.345 { 00:15:59.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.345 "dma_device_type": 2 00:15:59.345 }, 00:15:59.345 { 00:15:59.345 "dma_device_id": "system", 00:15:59.345 "dma_device_type": 1 00:15:59.345 }, 00:15:59.345 { 00:15:59.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.345 "dma_device_type": 2 00:15:59.345 } 00:15:59.345 ], 00:15:59.345 "driver_specific": { 00:15:59.345 "raid": { 00:15:59.345 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:15:59.345 "strip_size_kb": 0, 00:15:59.345 "state": "online", 00:15:59.345 "raid_level": "raid1", 00:15:59.345 "superblock": true, 00:15:59.345 "num_base_bdevs": 2, 00:15:59.345 "num_base_bdevs_discovered": 2, 00:15:59.345 "num_base_bdevs_operational": 
2, 00:15:59.345 "base_bdevs_list": [ 00:15:59.345 { 00:15:59.345 "name": "pt1", 00:15:59.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.345 "is_configured": true, 00:15:59.345 "data_offset": 256, 00:15:59.345 "data_size": 7936 00:15:59.345 }, 00:15:59.345 { 00:15:59.345 "name": "pt2", 00:15:59.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.345 "is_configured": true, 00:15:59.345 "data_offset": 256, 00:15:59.345 "data_size": 7936 00:15:59.345 } 00:15:59.345 ] 00:15:59.345 } 00:15:59.345 } 00:15:59.345 }' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:59.345 pt2' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.345 02:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.345 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.345 [2024-12-07 02:00:04.788164] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc3e4582-8c0c-4ac5-959d-d04b4c938754 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z bc3e4582-8c0c-4ac5-959d-d04b4c938754 ']' 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 [2024-12-07 02:00:04.815859] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.604 [2024-12-07 02:00:04.815887] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:59.604 [2024-12-07 02:00:04.815972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.604 [2024-12-07 02:00:04.816046] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.604 [2024-12-07 02:00:04.816057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:59.604 02:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.604 [2024-12-07 02:00:04.959658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.604 [2024-12-07 02:00:04.961672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:15:59.604 [2024-12-07 02:00:04.961764] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.604 [2024-12-07 02:00:04.961827] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.604 [2024-12-07 02:00:04.961849] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.604 [2024-12-07 02:00:04.961859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:59.604 request: 00:15:59.604 { 00:15:59.604 "name": "raid_bdev1", 00:15:59.604 "raid_level": "raid1", 00:15:59.604 "base_bdevs": [ 00:15:59.604 "malloc1", 00:15:59.604 "malloc2" 00:15:59.604 ], 00:15:59.604 "superblock": false, 00:15:59.604 "method": "bdev_raid_create", 00:15:59.604 "req_id": 1 00:15:59.604 } 00:15:59.604 Got JSON-RPC error response 00:15:59.604 response: 00:15:59.604 { 00:15:59.604 "code": -17, 00:15:59.604 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.604 } 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.604 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:59.605 02:00:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.605 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 02:00:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 [2024-12-07 02:00:05.027502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.605 [2024-12-07 02:00:05.027670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.605 [2024-12-07 02:00:05.027711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:59.605 [2024-12-07 02:00:05.027742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.605 [2024-12-07 02:00:05.029738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.605 [2024-12-07 02:00:05.029811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.605 [2024-12-07 02:00:05.029939] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.605 [2024-12-07 02:00:05.029996] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.605 pt1 00:15:59.605 02:00:05 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.605 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.863 
02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.863 "name": "raid_bdev1", 00:15:59.863 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:15:59.863 "strip_size_kb": 0, 00:15:59.863 "state": "configuring", 00:15:59.863 "raid_level": "raid1", 00:15:59.863 "superblock": true, 00:15:59.863 "num_base_bdevs": 2, 00:15:59.863 "num_base_bdevs_discovered": 1, 00:15:59.863 "num_base_bdevs_operational": 2, 00:15:59.863 "base_bdevs_list": [ 00:15:59.863 { 00:15:59.863 "name": "pt1", 00:15:59.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:59.863 "is_configured": true, 00:15:59.863 "data_offset": 256, 00:15:59.863 "data_size": 7936 00:15:59.863 }, 00:15:59.863 { 00:15:59.863 "name": null, 00:15:59.863 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.863 "is_configured": false, 00:15:59.863 "data_offset": 256, 00:15:59.863 "data_size": 7936 00:15:59.863 } 00:15:59.863 ] 00:15:59.863 }' 00:15:59.863 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.863 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.122 [2024-12-07 02:00:05.438801] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.122 [2024-12-07 02:00:05.438864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.122 [2024-12-07 02:00:05.438888] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:00.122 [2024-12-07 02:00:05.438897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.122 [2024-12-07 02:00:05.439080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.122 [2024-12-07 02:00:05.439093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.122 [2024-12-07 02:00:05.439145] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.122 [2024-12-07 02:00:05.439173] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.122 [2024-12-07 02:00:05.439263] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:16:00.122 [2024-12-07 02:00:05.439293] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:00.122 [2024-12-07 02:00:05.439371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:00.122 [2024-12-07 02:00:05.439430] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:16:00.122 [2024-12-07 02:00:05.439444] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:16:00.122 [2024-12-07 02:00:05.439505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.122 pt2 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:00.122 02:00:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.122 02:00:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.122 "name": "raid_bdev1", 00:16:00.122 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:00.122 "strip_size_kb": 0, 00:16:00.122 "state": "online", 00:16:00.122 "raid_level": "raid1", 00:16:00.122 "superblock": true, 00:16:00.122 "num_base_bdevs": 2, 00:16:00.122 "num_base_bdevs_discovered": 2, 00:16:00.122 "num_base_bdevs_operational": 2, 00:16:00.122 "base_bdevs_list": [ 00:16:00.122 { 00:16:00.122 "name": "pt1", 00:16:00.122 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.122 "is_configured": true, 00:16:00.122 "data_offset": 256, 00:16:00.122 "data_size": 7936 00:16:00.122 }, 00:16:00.122 { 00:16:00.122 "name": "pt2", 00:16:00.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.122 "is_configured": true, 00:16:00.122 "data_offset": 256, 00:16:00.122 "data_size": 7936 00:16:00.122 } 00:16:00.122 ] 00:16:00.122 }' 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.122 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.690 [2024-12-07 02:00:05.922236] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:00.690 "name": "raid_bdev1", 00:16:00.690 "aliases": [ 00:16:00.690 "bc3e4582-8c0c-4ac5-959d-d04b4c938754" 00:16:00.690 ], 00:16:00.690 "product_name": "Raid Volume", 00:16:00.690 "block_size": 4128, 00:16:00.690 "num_blocks": 7936, 00:16:00.690 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:00.690 "md_size": 32, 00:16:00.690 "md_interleave": true, 00:16:00.690 "dif_type": 0, 00:16:00.690 "assigned_rate_limits": { 00:16:00.690 "rw_ios_per_sec": 0, 00:16:00.690 "rw_mbytes_per_sec": 0, 00:16:00.690 "r_mbytes_per_sec": 0, 00:16:00.690 "w_mbytes_per_sec": 0 00:16:00.690 }, 00:16:00.690 "claimed": false, 00:16:00.690 "zoned": false, 00:16:00.690 "supported_io_types": { 00:16:00.690 "read": true, 00:16:00.690 "write": true, 00:16:00.690 "unmap": false, 00:16:00.690 "flush": false, 00:16:00.690 "reset": true, 00:16:00.690 "nvme_admin": false, 00:16:00.690 "nvme_io": false, 00:16:00.690 "nvme_io_md": false, 00:16:00.690 "write_zeroes": true, 00:16:00.690 "zcopy": false, 00:16:00.690 "get_zone_info": false, 00:16:00.690 "zone_management": false, 00:16:00.690 "zone_append": false, 00:16:00.690 "compare": false, 00:16:00.690 "compare_and_write": false, 00:16:00.690 "abort": false, 00:16:00.690 "seek_hole": false, 
00:16:00.690 "seek_data": false, 00:16:00.690 "copy": false, 00:16:00.690 "nvme_iov_md": false 00:16:00.690 }, 00:16:00.690 "memory_domains": [ 00:16:00.690 { 00:16:00.690 "dma_device_id": "system", 00:16:00.690 "dma_device_type": 1 00:16:00.690 }, 00:16:00.690 { 00:16:00.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.690 "dma_device_type": 2 00:16:00.690 }, 00:16:00.690 { 00:16:00.690 "dma_device_id": "system", 00:16:00.690 "dma_device_type": 1 00:16:00.690 }, 00:16:00.690 { 00:16:00.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.690 "dma_device_type": 2 00:16:00.690 } 00:16:00.690 ], 00:16:00.690 "driver_specific": { 00:16:00.690 "raid": { 00:16:00.690 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:00.690 "strip_size_kb": 0, 00:16:00.690 "state": "online", 00:16:00.690 "raid_level": "raid1", 00:16:00.690 "superblock": true, 00:16:00.690 "num_base_bdevs": 2, 00:16:00.690 "num_base_bdevs_discovered": 2, 00:16:00.690 "num_base_bdevs_operational": 2, 00:16:00.690 "base_bdevs_list": [ 00:16:00.690 { 00:16:00.690 "name": "pt1", 00:16:00.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:00.690 "is_configured": true, 00:16:00.690 "data_offset": 256, 00:16:00.690 "data_size": 7936 00:16:00.690 }, 00:16:00.690 { 00:16:00.690 "name": "pt2", 00:16:00.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.690 "is_configured": true, 00:16:00.690 "data_offset": 256, 00:16:00.690 "data_size": 7936 00:16:00.690 } 00:16:00.690 ] 00:16:00.690 } 00:16:00.690 } 00:16:00.690 }' 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:00.690 02:00:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:00.690 pt2' 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.690 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.691 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.949 
02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.949 [2024-12-07 02:00:06.165810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' bc3e4582-8c0c-4ac5-959d-d04b4c938754 '!=' bc3e4582-8c0c-4ac5-959d-d04b4c938754 ']' 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.949 [2024-12-07 02:00:06.209504] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:00.949 02:00:06 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.949 02:00:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.949 "name": "raid_bdev1", 00:16:00.949 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:00.949 "strip_size_kb": 0, 00:16:00.949 "state": "online", 00:16:00.949 "raid_level": "raid1", 00:16:00.949 "superblock": true, 00:16:00.949 "num_base_bdevs": 2, 00:16:00.949 "num_base_bdevs_discovered": 1, 00:16:00.949 "num_base_bdevs_operational": 1, 00:16:00.949 "base_bdevs_list": [ 00:16:00.949 { 00:16:00.949 "name": null, 00:16:00.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.949 "is_configured": false, 00:16:00.949 "data_offset": 0, 00:16:00.949 "data_size": 7936 00:16:00.949 }, 00:16:00.949 { 00:16:00.949 "name": "pt2", 00:16:00.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:00.949 "is_configured": true, 00:16:00.949 "data_offset": 256, 00:16:00.949 "data_size": 7936 00:16:00.949 } 00:16:00.949 ] 00:16:00.949 }' 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.949 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.207 [2024-12-07 02:00:06.640719] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.207 [2024-12-07 02:00:06.640810] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:01.207 [2024-12-07 02:00:06.640911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:01.207 [2024-12-07 02:00:06.640981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:01.207 [2024-12-07 02:00:06.641015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:01.207 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:01.465 02:00:06 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.465 [2024-12-07 02:00:06.716543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.465 [2024-12-07 02:00:06.716662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.465 [2024-12-07 02:00:06.716716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:16:01.465 [2024-12-07 02:00:06.716747] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.465 [2024-12-07 02:00:06.718774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.465 [2024-12-07 02:00:06.718842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.465 [2024-12-07 02:00:06.718920] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:01.465 [2024-12-07 02:00:06.718969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.465 [2024-12-07 02:00:06.719069] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:16:01.465 [2024-12-07 02:00:06.719120] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:01.465 [2024-12-07 02:00:06.719217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:01.465 [2024-12-07 02:00:06.719309] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:16:01.465 [2024-12-07 02:00:06.719346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:16:01.465 [2024-12-07 02:00:06.719448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.465 pt2 00:16:01.465 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 
00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.466 "name": "raid_bdev1", 00:16:01.466 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:01.466 "strip_size_kb": 0, 00:16:01.466 "state": "online", 00:16:01.466 "raid_level": "raid1", 00:16:01.466 "superblock": true, 00:16:01.466 "num_base_bdevs": 2, 00:16:01.466 "num_base_bdevs_discovered": 1, 00:16:01.466 "num_base_bdevs_operational": 1, 00:16:01.466 "base_bdevs_list": [ 00:16:01.466 { 00:16:01.466 "name": null, 00:16:01.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.466 "is_configured": false, 00:16:01.466 "data_offset": 256, 00:16:01.466 "data_size": 7936 00:16:01.466 }, 00:16:01.466 { 00:16:01.466 "name": "pt2", 00:16:01.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.466 "is_configured": true, 00:16:01.466 "data_offset": 256, 00:16:01.466 "data_size": 7936 00:16:01.466 } 00:16:01.466 ] 00:16:01.466 }' 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.466 02:00:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.034 02:00:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.034 [2024-12-07 02:00:07.211727] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.034 [2024-12-07 02:00:07.211827] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.034 [2024-12-07 02:00:07.211944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.034 [2024-12-07 02:00:07.212014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.034 [2024-12-07 02:00:07.212059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.034 [2024-12-07 02:00:07.275637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.034 [2024-12-07 02:00:07.275759] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.034 [2024-12-07 02:00:07.275820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:16:02.034 [2024-12-07 02:00:07.275859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.034 [2024-12-07 02:00:07.277820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.034 [2024-12-07 02:00:07.277888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.034 [2024-12-07 02:00:07.277963] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.034 [2024-12-07 02:00:07.278031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.034 [2024-12-07 02:00:07.278151] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:02.034 [2024-12-07 02:00:07.278209] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.034 [2024-12-07 02:00:07.278300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:16:02.034 [2024-12-07 02:00:07.278377] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:02.034 [2024-12-07 02:00:07.278482] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000002380 00:16:02.034 [2024-12-07 02:00:07.278522] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:02.034 [2024-12-07 02:00:07.278634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:02.034 [2024-12-07 02:00:07.278740] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:16:02.034 [2024-12-07 02:00:07.278775] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:16:02.034 [2024-12-07 02:00:07.278878] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.034 pt1 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.034 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.035 02:00:07 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.035 "name": "raid_bdev1", 00:16:02.035 "uuid": "bc3e4582-8c0c-4ac5-959d-d04b4c938754", 00:16:02.035 "strip_size_kb": 0, 00:16:02.035 "state": "online", 00:16:02.035 "raid_level": "raid1", 00:16:02.035 "superblock": true, 00:16:02.035 "num_base_bdevs": 2, 00:16:02.035 "num_base_bdevs_discovered": 1, 00:16:02.035 "num_base_bdevs_operational": 1, 00:16:02.035 "base_bdevs_list": [ 00:16:02.035 { 00:16:02.035 "name": null, 00:16:02.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.035 "is_configured": false, 00:16:02.035 "data_offset": 256, 00:16:02.035 "data_size": 7936 00:16:02.035 }, 00:16:02.035 { 00:16:02.035 "name": "pt2", 00:16:02.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.035 "is_configured": true, 00:16:02.035 "data_offset": 256, 00:16:02.035 "data_size": 7936 00:16:02.035 } 00:16:02.035 ] 00:16:02.035 }' 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.035 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:16:02.294 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:02.295 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:02.295 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.295 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.295 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.554 [2024-12-07 02:00:07.783028] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' bc3e4582-8c0c-4ac5-959d-d04b4c938754 '!=' bc3e4582-8c0c-4ac5-959d-d04b4c938754 ']' 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98676 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98676 ']' 00:16:02.554 02:00:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98676 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98676 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98676' 00:16:02.554 killing process with pid 98676 00:16:02.554 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 98676 00:16:02.554 [2024-12-07 02:00:07.860329] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.555 [2024-12-07 02:00:07.860443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.555 [2024-12-07 02:00:07.860500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.555 02:00:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 98676 00:16:02.555 [2024-12-07 02:00:07.860511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:16:02.555 [2024-12-07 02:00:07.884633] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:02.814 02:00:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:16:02.814 00:16:02.814 real 0m5.058s 00:16:02.814 user 0m8.266s 00:16:02.814 sys 0m1.111s 00:16:02.814 
************************************ 00:16:02.814 END TEST raid_superblock_test_md_interleaved 00:16:02.814 ************************************ 00:16:02.814 02:00:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.814 02:00:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.814 02:00:08 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:16:02.814 02:00:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:02.814 02:00:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.814 02:00:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:02.814 ************************************ 00:16:02.814 START TEST raid_rebuild_test_sb_md_interleaved 00:16:02.814 ************************************ 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.814 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=98993 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 98993 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 98993 ']' 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.815 02:00:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.075 [2024-12-07 02:00:08.290589] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:03.075 [2024-12-07 02:00:08.290804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98993 ]
00:16:03.075 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.075 Zero copy mechanism will not be used. 00:16:03.075 [2024-12-07 02:00:08.419131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.075 [2024-12-07 02:00:08.467269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.075 [2024-12-07 02:00:08.508665] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.075 [2024-12-07 02:00:08.508784] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 BaseBdev1_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 [2024-12-07 02:00:09.150310] vbdev_passthru.c:
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.015 [2024-12-07 02:00:09.150427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.015 [2024-12-07 02:00:09.150474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:16:04.015 [2024-12-07 02:00:09.150502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.015 [2024-12-07 02:00:09.152440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.015 [2024-12-07 02:00:09.152505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.015 BaseBdev1 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 BaseBdev2_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 [2024-12-07 02:00:09.190135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:16:04.015 [2024-12-07 02:00:09.190234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.015 [2024-12-07 02:00:09.190278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.015 [2024-12-07 02:00:09.190309] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.015 [2024-12-07 02:00:09.192361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.015 [2024-12-07 02:00:09.192432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.015 BaseBdev2 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 spare_malloc 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 spare_delay 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 [2024-12-07 02:00:09.230717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.015 [2024-12-07 02:00:09.230827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.015 [2024-12-07 02:00:09.230868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:04.015 [2024-12-07 02:00:09.230897] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.015 [2024-12-07 02:00:09.232805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.015 [2024-12-07 02:00:09.232869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.015 spare 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 [2024-12-07 02:00:09.242758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.015 [2024-12-07 02:00:09.244634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.015 [2024-12-07 02:00:09.244866] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:16:04.015 [2024-12-07 02:00:09.244902] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:04.015 [2024-12-07 02:00:09.245011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:16:04.015 [2024-12-07 02:00:09.245119] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:16:04.015 [2024-12-07 02:00:09.245171] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:16:04.015 [2024-12-07 02:00:09.245274] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.015 02:00:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.015 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.015 "name": "raid_bdev1", 00:16:04.015 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:04.015 "strip_size_kb": 0, 00:16:04.015 "state": "online", 00:16:04.015 "raid_level": "raid1", 00:16:04.015 "superblock": true, 00:16:04.015 "num_base_bdevs": 2, 00:16:04.015 "num_base_bdevs_discovered": 2, 00:16:04.015 "num_base_bdevs_operational": 2, 00:16:04.015 "base_bdevs_list": [ 00:16:04.015 { 00:16:04.015 "name": "BaseBdev1", 00:16:04.015 "uuid": "0a99b1c5-3a5a-53f6-bd4b-544257c5a01a", 00:16:04.015 "is_configured": true, 00:16:04.015 "data_offset": 256, 00:16:04.015 "data_size": 7936 00:16:04.015 }, 00:16:04.015 { 00:16:04.015 "name": "BaseBdev2", 00:16:04.015 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:04.015 "is_configured": true, 00:16:04.015 "data_offset": 256, 00:16:04.015 "data_size": 7936 00:16:04.015 } 00:16:04.016 ] 00:16:04.016 }' 00:16:04.016 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.016 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.275 02:00:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.275 [2024-12-07 02:00:09.694198] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.275 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.535 02:00:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.535 [2024-12-07 02:00:09.773809] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.535 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.535 02:00:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.536 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.536 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.536 "name": "raid_bdev1", 00:16:04.536 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:04.536 "strip_size_kb": 0, 00:16:04.536 "state": "online", 00:16:04.536 "raid_level": "raid1", 00:16:04.536 "superblock": true, 00:16:04.536 "num_base_bdevs": 2, 00:16:04.536 "num_base_bdevs_discovered": 1, 00:16:04.536 "num_base_bdevs_operational": 1, 00:16:04.536 "base_bdevs_list": [ 00:16:04.536 { 00:16:04.536 "name": null, 00:16:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.536 "is_configured": false, 00:16:04.536 "data_offset": 0, 00:16:04.536 "data_size": 7936 00:16:04.536 }, 00:16:04.536 { 00:16:04.536 "name": "BaseBdev2", 00:16:04.536 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:04.536 "is_configured": true, 00:16:04.536 "data_offset": 256, 00:16:04.536 "data_size": 7936 00:16:04.536 } 00:16:04.536 ] 00:16:04.536 }' 00:16:04.536 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.536 02:00:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.795 02:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:04.795 02:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.796 02:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.796 [2024-12-07 02:00:10.177157] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:04.796 [2024-12-07 02:00:10.180114] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:16:04.796 [2024-12-07 02:00:10.182139] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:04.796 02:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.796 02:00:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.735 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.736 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.736 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.996 "name": "raid_bdev1", 00:16:05.996 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:05.996 "strip_size_kb": 0, 00:16:05.996 "state": "online", 00:16:05.996 "raid_level": "raid1", 00:16:05.996 
"superblock": true, 00:16:05.996 "num_base_bdevs": 2, 00:16:05.996 "num_base_bdevs_discovered": 2, 00:16:05.996 "num_base_bdevs_operational": 2, 00:16:05.996 "process": { 00:16:05.996 "type": "rebuild", 00:16:05.996 "target": "spare", 00:16:05.996 "progress": { 00:16:05.996 "blocks": 2560, 00:16:05.996 "percent": 32 00:16:05.996 } 00:16:05.996 }, 00:16:05.996 "base_bdevs_list": [ 00:16:05.996 { 00:16:05.996 "name": "spare", 00:16:05.996 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:05.996 "is_configured": true, 00:16:05.996 "data_offset": 256, 00:16:05.996 "data_size": 7936 00:16:05.996 }, 00:16:05.996 { 00:16:05.996 "name": "BaseBdev2", 00:16:05.996 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:05.996 "is_configured": true, 00:16:05.996 "data_offset": 256, 00:16:05.996 "data_size": 7936 00:16:05.996 } 00:16:05.996 ] 00:16:05.996 }' 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.996 [2024-12-07 02:00:11.324938] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.996 [2024-12-07 02:00:11.387889] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No 
such device 00:16:05.996 [2024-12-07 02:00:11.388043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.996 [2024-12-07 02:00:11.388082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:05.996 [2024-12-07 02:00:11.388104] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.996 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.996 "name": "raid_bdev1", 00:16:05.996 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:05.996 "strip_size_kb": 0, 00:16:05.996 "state": "online", 00:16:05.996 "raid_level": "raid1", 00:16:05.996 "superblock": true, 00:16:05.996 "num_base_bdevs": 2, 00:16:05.996 "num_base_bdevs_discovered": 1, 00:16:05.996 "num_base_bdevs_operational": 1, 00:16:05.996 "base_bdevs_list": [ 00:16:05.996 { 00:16:05.996 "name": null, 00:16:05.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.996 "is_configured": false, 00:16:05.996 "data_offset": 0, 00:16:05.997 "data_size": 7936 00:16:05.997 }, 00:16:05.997 { 00:16:05.997 "name": "BaseBdev2", 00:16:05.997 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:05.997 "is_configured": true, 00:16:05.997 "data_offset": 256, 00:16:05.997 "data_size": 7936 00:16:05.997 } 00:16:05.997 ] 00:16:05.997 }' 00:16:05.997 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.997 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.566 
02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.566 "name": "raid_bdev1", 00:16:06.566 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:06.566 "strip_size_kb": 0, 00:16:06.566 "state": "online", 00:16:06.566 "raid_level": "raid1", 00:16:06.566 "superblock": true, 00:16:06.566 "num_base_bdevs": 2, 00:16:06.566 "num_base_bdevs_discovered": 1, 00:16:06.566 "num_base_bdevs_operational": 1, 00:16:06.566 "base_bdevs_list": [ 00:16:06.566 { 00:16:06.566 "name": null, 00:16:06.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.566 "is_configured": false, 00:16:06.566 "data_offset": 0, 00:16:06.566 "data_size": 7936 00:16:06.566 }, 00:16:06.566 { 00:16:06.566 "name": "BaseBdev2", 00:16:06.566 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:06.566 "is_configured": true, 00:16:06.566 "data_offset": 256, 00:16:06.566 "data_size": 7936 00:16:06.566 } 00:16:06.566 ] 00:16:06.566 }' 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.566 02:00:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.566 [2024-12-07 02:00:11.955181] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.566 [2024-12-07 02:00:11.958113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:16:06.566 [2024-12-07 02:00:11.960051] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.566 02:00:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.953 
02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.953 02:00:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.953 "name": "raid_bdev1", 00:16:07.953 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:07.953 "strip_size_kb": 0, 00:16:07.953 "state": "online", 00:16:07.953 "raid_level": "raid1", 00:16:07.953 "superblock": true, 00:16:07.953 "num_base_bdevs": 2, 00:16:07.953 "num_base_bdevs_discovered": 2, 00:16:07.953 "num_base_bdevs_operational": 2, 00:16:07.953 "process": { 00:16:07.953 "type": "rebuild", 00:16:07.953 "target": "spare", 00:16:07.953 "progress": { 00:16:07.953 "blocks": 2560, 00:16:07.953 "percent": 32 00:16:07.953 } 00:16:07.953 }, 00:16:07.953 "base_bdevs_list": [ 00:16:07.953 { 00:16:07.953 "name": "spare", 00:16:07.953 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:07.953 "is_configured": true, 00:16:07.953 "data_offset": 256, 00:16:07.953 "data_size": 7936 00:16:07.953 }, 00:16:07.953 { 00:16:07.953 "name": "BaseBdev2", 00:16:07.953 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:07.953 "is_configured": true, 00:16:07.953 "data_offset": 256, 00:16:07.953 "data_size": 7936 00:16:07.953 } 00:16:07.953 ] 00:16:07.953 }' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:07.953 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.953 "name": "raid_bdev1", 00:16:07.953 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:07.953 "strip_size_kb": 0, 00:16:07.953 "state": "online", 00:16:07.953 "raid_level": "raid1", 00:16:07.953 "superblock": true, 00:16:07.953 "num_base_bdevs": 2, 00:16:07.953 "num_base_bdevs_discovered": 2, 00:16:07.953 "num_base_bdevs_operational": 2, 00:16:07.953 "process": { 00:16:07.953 "type": "rebuild", 00:16:07.953 "target": "spare", 00:16:07.953 "progress": { 00:16:07.953 "blocks": 2816, 00:16:07.953 "percent": 35 00:16:07.953 } 00:16:07.953 }, 00:16:07.953 "base_bdevs_list": [ 00:16:07.953 { 00:16:07.953 "name": "spare", 00:16:07.953 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:07.953 "is_configured": true, 00:16:07.953 "data_offset": 256, 00:16:07.953 "data_size": 7936 00:16:07.953 }, 00:16:07.953 { 00:16:07.953 "name": "BaseBdev2", 00:16:07.953 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:07.953 "is_configured": true, 00:16:07.953 "data_offset": 256, 00:16:07.953 "data_size": 7936 00:16:07.953 } 00:16:07.953 ] 00:16:07.953 }' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.953 02:00:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.953 02:00:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.934 "name": "raid_bdev1", 00:16:08.934 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:08.934 "strip_size_kb": 0, 00:16:08.934 "state": 
"online", 00:16:08.934 "raid_level": "raid1", 00:16:08.934 "superblock": true, 00:16:08.934 "num_base_bdevs": 2, 00:16:08.934 "num_base_bdevs_discovered": 2, 00:16:08.934 "num_base_bdevs_operational": 2, 00:16:08.934 "process": { 00:16:08.934 "type": "rebuild", 00:16:08.934 "target": "spare", 00:16:08.934 "progress": { 00:16:08.934 "blocks": 5888, 00:16:08.934 "percent": 74 00:16:08.934 } 00:16:08.934 }, 00:16:08.934 "base_bdevs_list": [ 00:16:08.934 { 00:16:08.934 "name": "spare", 00:16:08.934 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:08.934 "is_configured": true, 00:16:08.934 "data_offset": 256, 00:16:08.934 "data_size": 7936 00:16:08.934 }, 00:16:08.934 { 00:16:08.934 "name": "BaseBdev2", 00:16:08.934 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:08.934 "is_configured": true, 00:16:08.934 "data_offset": 256, 00:16:08.934 "data_size": 7936 00:16:08.934 } 00:16:08.934 ] 00:16:08.934 }' 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.934 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.194 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.194 02:00:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.763 [2024-12-07 02:00:15.073304] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:09.763 [2024-12-07 02:00:15.073471] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:09.763 [2024-12-07 02:00:15.073619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.023 "name": "raid_bdev1", 00:16:10.023 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:10.023 "strip_size_kb": 0, 00:16:10.023 "state": "online", 00:16:10.023 "raid_level": "raid1", 00:16:10.023 "superblock": true, 00:16:10.023 "num_base_bdevs": 2, 00:16:10.023 "num_base_bdevs_discovered": 2, 00:16:10.023 "num_base_bdevs_operational": 2, 00:16:10.023 "base_bdevs_list": [ 00:16:10.023 { 00:16:10.023 "name": "spare", 00:16:10.023 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:10.023 "is_configured": true, 00:16:10.023 "data_offset": 256, 
00:16:10.023 "data_size": 7936 00:16:10.023 }, 00:16:10.023 { 00:16:10.023 "name": "BaseBdev2", 00:16:10.023 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:10.023 "is_configured": true, 00:16:10.023 "data_offset": 256, 00:16:10.023 "data_size": 7936 00:16:10.023 } 00:16:10.023 ] 00:16:10.023 }' 00:16:10.023 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.283 "name": "raid_bdev1", 00:16:10.283 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:10.283 "strip_size_kb": 0, 00:16:10.283 "state": "online", 00:16:10.283 "raid_level": "raid1", 00:16:10.283 "superblock": true, 00:16:10.283 "num_base_bdevs": 2, 00:16:10.283 "num_base_bdevs_discovered": 2, 00:16:10.283 "num_base_bdevs_operational": 2, 00:16:10.283 "base_bdevs_list": [ 00:16:10.283 { 00:16:10.283 "name": "spare", 00:16:10.283 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 }, 00:16:10.283 { 00:16:10.283 "name": "BaseBdev2", 00:16:10.283 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 } 00:16:10.283 ] 00:16:10.283 }' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.283 02:00:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.283 "name": "raid_bdev1", 00:16:10.283 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:10.283 "strip_size_kb": 0, 00:16:10.283 "state": "online", 00:16:10.283 "raid_level": "raid1", 00:16:10.283 "superblock": true, 00:16:10.283 "num_base_bdevs": 2, 00:16:10.283 "num_base_bdevs_discovered": 2, 
00:16:10.283 "num_base_bdevs_operational": 2, 00:16:10.283 "base_bdevs_list": [ 00:16:10.283 { 00:16:10.283 "name": "spare", 00:16:10.283 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 }, 00:16:10.283 { 00:16:10.283 "name": "BaseBdev2", 00:16:10.283 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:10.283 "is_configured": true, 00:16:10.283 "data_offset": 256, 00:16:10.283 "data_size": 7936 00:16:10.283 } 00:16:10.283 ] 00:16:10.283 }' 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.283 02:00:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:10.851 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.851 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 [2024-12-07 02:00:16.095865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.851 [2024-12-07 02:00:16.095950] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.851 [2024-12-07 02:00:16.096087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.851 [2024-12-07 02:00:16.096174] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.851 [2024-12-07 02:00:16.096224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:16:10.851 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 [2024-12-07 02:00:16.171713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.852 [2024-12-07 02:00:16.171817] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:10.852 [2024-12-07 02:00:16.171872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:10.852 [2024-12-07 02:00:16.171906] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.852 [2024-12-07 02:00:16.174053] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.852 [2024-12-07 02:00:16.174124] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.852 [2024-12-07 02:00:16.174204] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:10.852 [2024-12-07 02:00:16.174300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.852 [2024-12-07 02:00:16.174413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.852 spare 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 [2024-12-07 02:00:16.274353] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:16:10.852 [2024-12-07 02:00:16.274447] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:16:10.852 [2024-12-07 02:00:16.274626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:16:10.852 [2024-12-07 02:00:16.274782] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:16:10.852 [2024-12-07 02:00:16.274825] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:16:10.852 [2024-12-07 02:00:16.274957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.852 02:00:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:10.852 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.110 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.110 "name": "raid_bdev1", 00:16:11.110 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:11.110 "strip_size_kb": 0, 00:16:11.110 "state": "online", 00:16:11.110 "raid_level": "raid1", 00:16:11.110 "superblock": true, 00:16:11.110 "num_base_bdevs": 2, 00:16:11.110 "num_base_bdevs_discovered": 2, 00:16:11.110 "num_base_bdevs_operational": 2, 00:16:11.110 "base_bdevs_list": [ 00:16:11.110 { 00:16:11.110 "name": "spare", 00:16:11.110 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:11.110 "is_configured": true, 00:16:11.110 "data_offset": 256, 00:16:11.110 "data_size": 7936 00:16:11.110 }, 00:16:11.110 { 00:16:11.110 "name": "BaseBdev2", 00:16:11.110 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:11.110 "is_configured": true, 00:16:11.110 "data_offset": 256, 00:16:11.110 "data_size": 7936 00:16:11.110 } 00:16:11.110 ] 00:16:11.110 }' 00:16:11.110 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.110 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.369 "name": "raid_bdev1", 00:16:11.369 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:11.369 "strip_size_kb": 0, 00:16:11.369 "state": "online", 00:16:11.369 "raid_level": "raid1", 00:16:11.369 "superblock": true, 00:16:11.369 "num_base_bdevs": 2, 00:16:11.369 "num_base_bdevs_discovered": 2, 00:16:11.369 "num_base_bdevs_operational": 2, 00:16:11.369 "base_bdevs_list": [ 00:16:11.369 { 00:16:11.369 "name": "spare", 00:16:11.369 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:11.369 "is_configured": true, 00:16:11.369 "data_offset": 256, 00:16:11.369 "data_size": 7936 00:16:11.369 }, 00:16:11.369 { 00:16:11.369 "name": "BaseBdev2", 00:16:11.369 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:11.369 "is_configured": true, 00:16:11.369 "data_offset": 256, 00:16:11.369 "data_size": 7936 00:16:11.369 } 00:16:11.369 ] 00:16:11.369 }' 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.369 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.628 [2024-12-07 02:00:16.894571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.628 "name": "raid_bdev1", 00:16:11.628 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:11.628 "strip_size_kb": 0, 00:16:11.628 "state": "online", 00:16:11.628 "raid_level": "raid1", 00:16:11.628 "superblock": true, 00:16:11.628 "num_base_bdevs": 2, 00:16:11.628 "num_base_bdevs_discovered": 1, 00:16:11.628 "num_base_bdevs_operational": 1, 00:16:11.628 "base_bdevs_list": [ 00:16:11.628 { 00:16:11.628 "name": null, 00:16:11.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.628 
"is_configured": false, 00:16:11.628 "data_offset": 0, 00:16:11.628 "data_size": 7936 00:16:11.628 }, 00:16:11.628 { 00:16:11.628 "name": "BaseBdev2", 00:16:11.628 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:11.628 "is_configured": true, 00:16:11.628 "data_offset": 256, 00:16:11.628 "data_size": 7936 00:16:11.628 } 00:16:11.628 ] 00:16:11.628 }' 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.628 02:00:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.887 02:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:11.887 02:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.887 02:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:11.887 [2024-12-07 02:00:17.321858] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.887 [2024-12-07 02:00:17.322108] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:11.887 [2024-12-07 02:00:17.322183] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
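The `verify_raid_bdev_state raid_bdev1 online raid1 0 1` helper earlier in this chunk checks the `bdev_raid_get_bdevs` JSON dumped above: after `bdev_raid_remove_base_bdev spare`, the array stays online with one operational member and an unconfigured null slot. As an illustration only (not part of the test suite), the same assertions can be re-expressed in Python against that captured output, trimmed to the fields the helper reads:

```python
import json

# Captured `bdev_raid_get_bdevs` output from the log above, trimmed to the
# fields that verify_raid_bdev_state inspects.
info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true}
  ]
}
""")

# verify_raid_bdev_state raid_bdev1 online raid1 0 1, re-expressed as asserts
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["strip_size_kb"] == 0
assert info["num_base_bdevs_operational"] == 1

# the removed member leaves an unconfigured slot with a null name behind
configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
assert len(configured) == info["num_base_bdevs_discovered"] == 1
```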
00:16:11.887 [2024-12-07 02:00:17.322244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:11.887 [2024-12-07 02:00:17.325086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:16:11.887 [2024-12-07 02:00:17.327040] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.887 02:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.887 02:00:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:13.270 "name": "raid_bdev1", 00:16:13.270 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:13.270 "strip_size_kb": 0, 00:16:13.270 "state": "online", 00:16:13.270 "raid_level": "raid1", 00:16:13.270 "superblock": true, 00:16:13.270 "num_base_bdevs": 2, 00:16:13.270 "num_base_bdevs_discovered": 2, 00:16:13.270 "num_base_bdevs_operational": 2, 00:16:13.270 "process": { 00:16:13.270 "type": "rebuild", 00:16:13.270 "target": "spare", 00:16:13.270 "progress": { 00:16:13.270 "blocks": 2560, 00:16:13.270 "percent": 32 00:16:13.270 } 00:16:13.270 }, 00:16:13.270 "base_bdevs_list": [ 00:16:13.270 { 00:16:13.270 "name": "spare", 00:16:13.270 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:13.270 "is_configured": true, 00:16:13.270 "data_offset": 256, 00:16:13.270 "data_size": 7936 00:16:13.270 }, 00:16:13.270 { 00:16:13.270 "name": "BaseBdev2", 00:16:13.270 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:13.270 "is_configured": true, 00:16:13.270 "data_offset": 256, 00:16:13.270 "data_size": 7936 00:16:13.270 } 00:16:13.270 ] 00:16:13.270 }' 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.270 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.270 [2024-12-07 02:00:18.450078] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.270 [2024-12-07 02:00:18.531993] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.270 [2024-12-07 02:00:18.532140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.270 [2024-12-07 02:00:18.532180] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.271 [2024-12-07 02:00:18.532201] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.271 02:00:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.271 "name": "raid_bdev1", 00:16:13.271 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:13.271 "strip_size_kb": 0, 00:16:13.271 "state": "online", 00:16:13.271 "raid_level": "raid1", 00:16:13.271 "superblock": true, 00:16:13.271 "num_base_bdevs": 2, 00:16:13.271 "num_base_bdevs_discovered": 1, 00:16:13.271 "num_base_bdevs_operational": 1, 00:16:13.271 "base_bdevs_list": [ 00:16:13.271 { 00:16:13.271 "name": null, 00:16:13.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.271 "is_configured": false, 00:16:13.271 "data_offset": 0, 00:16:13.271 "data_size": 7936 00:16:13.271 }, 00:16:13.271 { 00:16:13.271 "name": "BaseBdev2", 00:16:13.271 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:13.271 "is_configured": true, 00:16:13.271 "data_offset": 256, 00:16:13.271 "data_size": 7936 00:16:13.271 } 00:16:13.271 ] 00:16:13.271 }' 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.271 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.843 02:00:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:13.843 02:00:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.843 02:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:13.843 [2024-12-07 02:00:19.007172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:13.843 [2024-12-07 02:00:19.007286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:13.843 [2024-12-07 02:00:19.007330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:13.843 [2024-12-07 02:00:19.007357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:13.843 [2024-12-07 02:00:19.007576] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:13.843 [2024-12-07 02:00:19.007619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:13.843 [2024-12-07 02:00:19.007715] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:13.843 [2024-12-07 02:00:19.007730] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:13.843 [2024-12-07 02:00:19.007742] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
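The examine path just logged decides from the superblock sequence number whether `spare` may rejoin `raid_bdev1`: seq_number 4 on the bdev versus 5 on the live array triggers the "Re-adding bdev spare" notice and a rebuild. The following is a hypothetical condensation of that branch for illustration only; the real logic lives in SPDK's `bdev_raid.c` (`raid_bdev_examine_sb`), and the behavior for a newer-than-array superblock is an assumption here:

```python
def examine_sb(sb_seq: int, raid_seq: int) -> str:
    """Hypothetical sketch of the seq_number comparison seen in the log."""
    if sb_seq < raid_seq:
        # Stale member: re-add it and start a rebuild, as logged for `spare`
        # ("seq_number on bdev spare (4) smaller than existing raid bdev (5)").
        return "re-add"
    # Otherwise treat the member as up to date and configure it directly
    # (assumed; the log only shows the stale-member path).
    return "configure"

# seq 4 on `spare` vs seq 5 on raid_bdev1 -> re-add plus rebuild
assert examine_sb(4, 5) == "re-add"
assert examine_sb(5, 5) == "configure"
```

The later failure for `BaseBdev1` ("raid superblock does not contain this bdev's uuid" followed by "Failed to examine bdev BaseBdev1: Invalid argument") shows the other gate in the same path: a matching uuid in the superblock is required before the sequence-number comparison can lead to a re-add at all.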
00:16:13.843 [2024-12-07 02:00:19.007764] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:13.843 [2024-12-07 02:00:19.010503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:13.843 [2024-12-07 02:00:19.012451] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.843 spare 00:16:13.843 02:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.843 02:00:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:14.782 "name": "raid_bdev1", 00:16:14.782 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:14.782 "strip_size_kb": 0, 00:16:14.782 "state": "online", 00:16:14.782 "raid_level": "raid1", 00:16:14.782 "superblock": true, 00:16:14.782 "num_base_bdevs": 2, 00:16:14.782 "num_base_bdevs_discovered": 2, 00:16:14.782 "num_base_bdevs_operational": 2, 00:16:14.782 "process": { 00:16:14.782 "type": "rebuild", 00:16:14.782 "target": "spare", 00:16:14.782 "progress": { 00:16:14.782 "blocks": 2560, 00:16:14.782 "percent": 32 00:16:14.782 } 00:16:14.782 }, 00:16:14.782 "base_bdevs_list": [ 00:16:14.782 { 00:16:14.782 "name": "spare", 00:16:14.782 "uuid": "f405f24a-36e8-51c8-b9fd-cf69fe966308", 00:16:14.782 "is_configured": true, 00:16:14.782 "data_offset": 256, 00:16:14.782 "data_size": 7936 00:16:14.782 }, 00:16:14.782 { 00:16:14.782 "name": "BaseBdev2", 00:16:14.782 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:14.782 "is_configured": true, 00:16:14.782 "data_offset": 256, 00:16:14.782 "data_size": 7936 00:16:14.782 } 00:16:14.782 ] 00:16:14.782 }' 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 [2024-12-07 
02:00:20.179222] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.782 [2024-12-07 02:00:20.217266] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.782 [2024-12-07 02:00:20.217334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.782 [2024-12-07 02:00:20.217350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.782 [2024-12-07 02:00:20.217359] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.782 02:00:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:14.782 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.042 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.042 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.042 "name": "raid_bdev1", 00:16:15.042 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:15.042 "strip_size_kb": 0, 00:16:15.042 "state": "online", 00:16:15.042 "raid_level": "raid1", 00:16:15.042 "superblock": true, 00:16:15.042 "num_base_bdevs": 2, 00:16:15.042 "num_base_bdevs_discovered": 1, 00:16:15.042 "num_base_bdevs_operational": 1, 00:16:15.042 "base_bdevs_list": [ 00:16:15.042 { 00:16:15.042 "name": null, 00:16:15.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.042 "is_configured": false, 00:16:15.042 "data_offset": 0, 00:16:15.043 "data_size": 7936 00:16:15.043 }, 00:16:15.043 { 00:16:15.043 "name": "BaseBdev2", 00:16:15.043 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:15.043 "is_configured": true, 00:16:15.043 "data_offset": 256, 00:16:15.043 "data_size": 7936 00:16:15.043 } 00:16:15.043 ] 00:16:15.043 }' 00:16:15.043 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.043 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:15.303 02:00:20 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.303 "name": "raid_bdev1", 00:16:15.303 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:15.303 "strip_size_kb": 0, 00:16:15.303 "state": "online", 00:16:15.303 "raid_level": "raid1", 00:16:15.303 "superblock": true, 00:16:15.303 "num_base_bdevs": 2, 00:16:15.303 "num_base_bdevs_discovered": 1, 00:16:15.303 "num_base_bdevs_operational": 1, 00:16:15.303 "base_bdevs_list": [ 00:16:15.303 { 00:16:15.303 "name": null, 00:16:15.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.303 "is_configured": false, 00:16:15.303 "data_offset": 0, 00:16:15.303 "data_size": 7936 00:16:15.303 }, 00:16:15.303 { 00:16:15.303 "name": "BaseBdev2", 00:16:15.303 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:15.303 "is_configured": true, 00:16:15.303 "data_offset": 256, 
00:16:15.303 "data_size": 7936 00:16:15.303 } 00:16:15.303 ] 00:16:15.303 }' 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:15.303 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:15.564 [2024-12-07 02:00:20.812034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:15.564 [2024-12-07 02:00:20.812177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.564 [2024-12-07 02:00:20.812218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:15.564 [2024-12-07 02:00:20.812254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.564 [2024-12-07 02:00:20.812452] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.564 [2024-12-07 02:00:20.812501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:15.564 [2024-12-07 02:00:20.812577] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:15.564 [2024-12-07 02:00:20.812616] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:15.564 [2024-12-07 02:00:20.812655] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:15.564 [2024-12-07 02:00:20.812706] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:15.564 BaseBdev1 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.564 02:00:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.504 02:00:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.504 "name": "raid_bdev1", 00:16:16.504 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:16.504 "strip_size_kb": 0, 00:16:16.504 "state": "online", 00:16:16.504 "raid_level": "raid1", 00:16:16.504 "superblock": true, 00:16:16.504 "num_base_bdevs": 2, 00:16:16.504 "num_base_bdevs_discovered": 1, 00:16:16.504 "num_base_bdevs_operational": 1, 00:16:16.504 "base_bdevs_list": [ 00:16:16.504 { 00:16:16.504 "name": null, 00:16:16.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.504 "is_configured": false, 00:16:16.504 "data_offset": 0, 00:16:16.504 "data_size": 7936 00:16:16.504 }, 00:16:16.504 { 00:16:16.504 "name": "BaseBdev2", 00:16:16.504 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:16.504 "is_configured": true, 00:16:16.504 "data_offset": 256, 00:16:16.504 "data_size": 7936 00:16:16.504 } 00:16:16.504 ] 00:16:16.504 }' 00:16:16.504 02:00:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.504 02:00:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.074 "name": "raid_bdev1", 00:16:17.074 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:17.074 "strip_size_kb": 0, 00:16:17.074 "state": "online", 00:16:17.074 "raid_level": "raid1", 00:16:17.074 "superblock": true, 00:16:17.074 "num_base_bdevs": 2, 00:16:17.074 "num_base_bdevs_discovered": 1, 00:16:17.074 "num_base_bdevs_operational": 1, 00:16:17.074 "base_bdevs_list": [ 00:16:17.074 { 00:16:17.074 "name": 
null, 00:16:17.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.074 "is_configured": false, 00:16:17.074 "data_offset": 0, 00:16:17.074 "data_size": 7936 00:16:17.074 }, 00:16:17.074 { 00:16:17.074 "name": "BaseBdev2", 00:16:17.074 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:17.074 "is_configured": true, 00:16:17.074 "data_offset": 256, 00:16:17.074 "data_size": 7936 00:16:17.074 } 00:16:17.074 ] 00:16:17.074 }' 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:17.074 [2024-12-07 02:00:22.401511] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.074 [2024-12-07 02:00:22.401748] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:17.074 [2024-12-07 02:00:22.401803] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:17.074 request: 00:16:17.074 { 00:16:17.074 "base_bdev": "BaseBdev1", 00:16:17.074 "raid_bdev": "raid_bdev1", 00:16:17.074 "method": "bdev_raid_add_base_bdev", 00:16:17.074 "req_id": 1 00:16:17.074 } 00:16:17.074 Got JSON-RPC error response 00:16:17.074 response: 00:16:17.074 { 00:16:17.074 "code": -22, 00:16:17.074 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:17.074 } 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:17.074 02:00:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.014 "name": "raid_bdev1", 00:16:18.014 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:18.014 "strip_size_kb": 0, 
00:16:18.014 "state": "online", 00:16:18.014 "raid_level": "raid1", 00:16:18.014 "superblock": true, 00:16:18.014 "num_base_bdevs": 2, 00:16:18.014 "num_base_bdevs_discovered": 1, 00:16:18.014 "num_base_bdevs_operational": 1, 00:16:18.014 "base_bdevs_list": [ 00:16:18.014 { 00:16:18.014 "name": null, 00:16:18.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.014 "is_configured": false, 00:16:18.014 "data_offset": 0, 00:16:18.014 "data_size": 7936 00:16:18.014 }, 00:16:18.014 { 00:16:18.014 "name": "BaseBdev2", 00:16:18.014 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:18.014 "is_configured": true, 00:16:18.014 "data_offset": 256, 00:16:18.014 "data_size": 7936 00:16:18.014 } 00:16:18.014 ] 00:16:18.014 }' 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.014 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.584 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.584 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.585 
02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.585 "name": "raid_bdev1", 00:16:18.585 "uuid": "3e7af249-4d97-4cdb-821e-678147e18aa1", 00:16:18.585 "strip_size_kb": 0, 00:16:18.585 "state": "online", 00:16:18.585 "raid_level": "raid1", 00:16:18.585 "superblock": true, 00:16:18.585 "num_base_bdevs": 2, 00:16:18.585 "num_base_bdevs_discovered": 1, 00:16:18.585 "num_base_bdevs_operational": 1, 00:16:18.585 "base_bdevs_list": [ 00:16:18.585 { 00:16:18.585 "name": null, 00:16:18.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.585 "is_configured": false, 00:16:18.585 "data_offset": 0, 00:16:18.585 "data_size": 7936 00:16:18.585 }, 00:16:18.585 { 00:16:18.585 "name": "BaseBdev2", 00:16:18.585 "uuid": "213f9b47-f75e-5cfa-999e-82c00f26cd96", 00:16:18.585 "is_configured": true, 00:16:18.585 "data_offset": 256, 00:16:18.585 "data_size": 7936 00:16:18.585 } 00:16:18.585 ] 00:16:18.585 }' 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 98993 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 98993 ']' 00:16:18.585 02:00:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 98993 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:18.585 02:00:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98993 00:16:18.585 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:18.585 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:18.585 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98993' 00:16:18.585 killing process with pid 98993 00:16:18.585 Received shutdown signal, test time was about 60.000000 seconds 00:16:18.585 00:16:18.585 Latency(us) 00:16:18.585 [2024-12-07T02:00:24.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.585 [2024-12-07T02:00:24.047Z] =================================================================================================================== 00:16:18.585 [2024-12-07T02:00:24.047Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:18.585 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 98993 00:16:18.585 [2024-12-07 02:00:24.012315] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:18.585 [2024-12-07 02:00:24.012444] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.585 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 98993 00:16:18.585 [2024-12-07 02:00:24.012496] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:18.585 [2024-12-07 02:00:24.012506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:18.845 [2024-12-07 02:00:24.045827] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:18.845 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:18.845 00:16:18.845 real 0m16.068s 00:16:18.845 user 0m21.476s 00:16:18.845 sys 0m1.553s 00:16:18.845 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:18.845 ************************************ 00:16:18.845 END TEST raid_rebuild_test_sb_md_interleaved 00:16:18.845 ************************************ 00:16:18.845 02:00:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:19.105 02:00:24 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:19.105 02:00:24 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:19.105 02:00:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 98993 ']' 00:16:19.105 02:00:24 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 98993 00:16:19.105 02:00:24 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:19.105 00:16:19.105 real 9m52.040s 00:16:19.105 user 14m3.560s 00:16:19.105 sys 1m45.838s 00:16:19.105 ************************************ 00:16:19.105 END TEST bdev_raid 00:16:19.105 02:00:24 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.105 02:00:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.105 ************************************ 00:16:19.105 02:00:24 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:19.105 02:00:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:19.105 02:00:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.105 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.105 
************************************ 00:16:19.105 START TEST spdkcli_raid 00:16:19.105 ************************************ 00:16:19.105 02:00:24 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:19.105 * Looking for test storage... 00:16:19.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:19.105 02:00:24 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:19.105 02:00:24 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:16:19.105 02:00:24 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:19.365 02:00:24 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:19.365 02:00:24 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.366 02:00:24 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.366 --rc genhtml_branch_coverage=1 00:16:19.366 --rc genhtml_function_coverage=1 00:16:19.366 --rc genhtml_legend=1 00:16:19.366 --rc geninfo_all_blocks=1 00:16:19.366 --rc geninfo_unexecuted_blocks=1 00:16:19.366 00:16:19.366 ' 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.366 --rc genhtml_branch_coverage=1 00:16:19.366 --rc genhtml_function_coverage=1 00:16:19.366 --rc genhtml_legend=1 00:16:19.366 --rc geninfo_all_blocks=1 00:16:19.366 --rc geninfo_unexecuted_blocks=1 00:16:19.366 00:16:19.366 ' 00:16:19.366 
02:00:24 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.366 --rc genhtml_branch_coverage=1 00:16:19.366 --rc genhtml_function_coverage=1 00:16:19.366 --rc genhtml_legend=1 00:16:19.366 --rc geninfo_all_blocks=1 00:16:19.366 --rc geninfo_unexecuted_blocks=1 00:16:19.366 00:16:19.366 ' 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:19.366 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.366 --rc genhtml_branch_coverage=1 00:16:19.366 --rc genhtml_function_coverage=1 00:16:19.366 --rc genhtml_legend=1 00:16:19.366 --rc geninfo_all_blocks=1 00:16:19.366 --rc geninfo_unexecuted_blocks=1 00:16:19.366 00:16:19.366 ' 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:19.366 02:00:24 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99663 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:19.366 02:00:24 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99663 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 99663 ']' 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.366 02:00:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:19.366 [2024-12-07 02:00:24.770540] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:19.366 [2024-12-07 02:00:24.771227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99663 ] 00:16:19.626 [2024-12-07 02:00:24.916923] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.626 [2024-12-07 02:00:24.967851] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.626 [2024-12-07 02:00:24.967946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:16:20.192 02:00:25 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.192 02:00:25 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:20.192 02:00:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:20.192 02:00:25 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:20.192 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:20.192 ' 00:16:22.096 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:22.096 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:22.096 02:00:27 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:22.096 02:00:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:22.096 02:00:27 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.096 02:00:27 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:22.096 02:00:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:22.096 02:00:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.096 02:00:27 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:22.096 ' 00:16:23.034 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:23.293 02:00:28 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:23.293 02:00:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.293 02:00:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.293 02:00:28 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:23.293 02:00:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.293 02:00:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.293 02:00:28 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:23.293 02:00:28 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:23.861 02:00:29 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:23.861 02:00:29 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:23.861 02:00:29 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:23.861 02:00:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:23.861 02:00:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.861 02:00:29 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:23.861 02:00:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:23.861 02:00:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.861 02:00:29 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:23.861 ' 00:16:24.801 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:24.801 02:00:30 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:24.801 02:00:30 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:24.801 02:00:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.061 02:00:30 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:25.061 02:00:30 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.061 02:00:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:25.062 02:00:30 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:25.062 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:25.062 ' 00:16:26.456 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:26.456 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:26.456 02:00:31 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:26.456 02:00:31 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99663 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99663 ']' 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99663 00:16:26.456 02:00:31 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99663 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99663' 00:16:26.456 killing process with pid 99663 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 99663 00:16:26.456 02:00:31 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 99663 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99663 ']' 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99663 00:16:27.024 02:00:32 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 99663 ']' 00:16:27.024 02:00:32 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 99663 00:16:27.024 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (99663) - No such process 00:16:27.024 02:00:32 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 99663 is not found' 00:16:27.024 Process with pid 99663 is not found 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:27.024 02:00:32 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:27.024 00:16:27.024 real 0m7.782s 00:16:27.024 user 0m16.513s 00:16:27.024 sys 
0m1.061s 00:16:27.024 02:00:32 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:27.024 02:00:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:27.024 ************************************ 00:16:27.024 END TEST spdkcli_raid 00:16:27.024 ************************************ 00:16:27.024 02:00:32 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:27.024 02:00:32 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:27.024 02:00:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:27.024 02:00:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.024 ************************************ 00:16:27.024 START TEST blockdev_raid5f 00:16:27.024 ************************************ 00:16:27.024 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:27.024 * Looking for test storage... 00:16:27.024 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:27.024 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:27.024 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:16:27.024 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:27.024 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.024 02:00:32 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:27.284 02:00:32 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.284 02:00:32 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.284 02:00:32 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.284 02:00:32 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:27.284 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.284 --rc genhtml_branch_coverage=1 00:16:27.284 --rc genhtml_function_coverage=1 00:16:27.284 --rc genhtml_legend=1 00:16:27.284 --rc geninfo_all_blocks=1 00:16:27.284 --rc geninfo_unexecuted_blocks=1 00:16:27.284 00:16:27.284 ' 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:27.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.284 --rc genhtml_branch_coverage=1 00:16:27.284 --rc genhtml_function_coverage=1 00:16:27.284 --rc genhtml_legend=1 00:16:27.284 --rc geninfo_all_blocks=1 00:16:27.284 --rc geninfo_unexecuted_blocks=1 00:16:27.284 00:16:27.284 ' 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:27.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.284 --rc genhtml_branch_coverage=1 00:16:27.284 --rc genhtml_function_coverage=1 00:16:27.284 --rc genhtml_legend=1 00:16:27.284 --rc geninfo_all_blocks=1 00:16:27.284 --rc geninfo_unexecuted_blocks=1 00:16:27.284 00:16:27.284 ' 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:27.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.284 --rc genhtml_branch_coverage=1 00:16:27.284 --rc genhtml_function_coverage=1 00:16:27.284 --rc genhtml_legend=1 00:16:27.284 --rc geninfo_all_blocks=1 00:16:27.284 --rc geninfo_unexecuted_blocks=1 00:16:27.284 00:16:27.284 ' 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=99921 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:27.284 02:00:32 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 99921 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 99921 ']' 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:27.284 02:00:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:27.284 [2024-12-07 02:00:32.602090] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:27.284 [2024-12-07 02:00:32.602785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99921 ] 00:16:27.543 [2024-12-07 02:00:32.746755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.543 [2024-12-07 02:00:32.795038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:28.112 02:00:33 blockdev_raid5f -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.112 Malloc0 00:16:28.112 Malloc1 00:16:28.112 Malloc2 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:16:28.112 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.112 02:00:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "db7978fa-007e-48ad-a72d-9540553ce912"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "db7978fa-007e-48ad-a72d-9540553ce912",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "db7978fa-007e-48ad-a72d-9540553ce912",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "636fcf8e-a0d3-4d82-b8bc-abacd213d44d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d97ab2dd-ac8c-4e36-851e-92d3b554a9dc",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6e263f01-cd48-4cd8-aefc-bc375d90361a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:28.373 02:00:33 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 99921 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 99921 ']' 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 99921 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99921 00:16:28.373 killing process with pid 99921 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99921' 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 99921 00:16:28.373 02:00:33 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 99921 00:16:28.744 02:00:34 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:28.744 02:00:34 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:28.744 02:00:34 
blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:28.744 02:00:34 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:28.744 02:00:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:28.744 ************************************ 00:16:28.744 START TEST bdev_hello_world 00:16:28.744 ************************************ 00:16:28.744 02:00:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:28.744 [2024-12-07 02:00:34.197909] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:28.744 [2024-12-07 02:00:34.198144] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99955 ] 00:16:29.004 [2024-12-07 02:00:34.343819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.004 [2024-12-07 02:00:34.392607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.264 [2024-12-07 02:00:34.576621] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:29.264 [2024-12-07 02:00:34.576685] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:29.264 [2024-12-07 02:00:34.576701] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:29.264 [2024-12-07 02:00:34.576999] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:29.264 [2024-12-07 02:00:34.577126] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:29.264 [2024-12-07 02:00:34.577167] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:29.264 [2024-12-07 02:00:34.577227] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from 
bdev : Hello World! 00:16:29.264 00:16:29.264 [2024-12-07 02:00:34.577254] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:29.523 00:16:29.523 real 0m0.708s 00:16:29.523 user 0m0.394s 00:16:29.523 sys 0m0.199s 00:16:29.523 02:00:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:29.523 02:00:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:29.523 ************************************ 00:16:29.523 END TEST bdev_hello_world 00:16:29.523 ************************************ 00:16:29.523 02:00:34 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:29.523 02:00:34 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:29.523 02:00:34 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:29.523 02:00:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:29.523 ************************************ 00:16:29.523 START TEST bdev_bounds 00:16:29.523 ************************************ 00:16:29.523 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:29.523 02:00:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=99986 00:16:29.523 02:00:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 99986' 00:16:29.524 Process bdevio pid: 99986 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 99986 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 99986 ']' 00:16:29.524 02:00:34 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:29.524 02:00:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:29.524 [2024-12-07 02:00:34.970837] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:29.524 [2024-12-07 02:00:34.971057] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99986 ] 00:16:29.784 [2024-12-07 02:00:35.110997] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:29.784 [2024-12-07 02:00:35.161584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.784 [2024-12-07 02:00:35.161680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.784 [2024-12-07 02:00:35.161844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.354 02:00:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:30.354 02:00:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:30.354 02:00:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:30.623 I/O targets: 00:16:30.623 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:30.623 
00:16:30.623 00:16:30.623 CUnit - A unit testing framework for C - Version 2.1-3 00:16:30.623 http://cunit.sourceforge.net/ 00:16:30.623 00:16:30.623 00:16:30.623 Suite: bdevio tests on: raid5f 00:16:30.623 Test: blockdev write read block ...passed 00:16:30.623 Test: blockdev write zeroes read block ...passed 00:16:30.623 Test: blockdev write zeroes read no split ...passed 00:16:30.623 Test: blockdev write zeroes read split ...passed 00:16:30.623 Test: blockdev write zeroes read split partial ...passed 00:16:30.623 Test: blockdev reset ...passed 00:16:30.623 Test: blockdev write read 8 blocks ...passed 00:16:30.623 Test: blockdev write read size > 128k ...passed 00:16:30.623 Test: blockdev write read invalid size ...passed 00:16:30.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:30.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:30.623 Test: blockdev write read max offset ...passed 00:16:30.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:30.623 Test: blockdev writev readv 8 blocks ...passed 00:16:30.623 Test: blockdev writev readv 30 x 1block ...passed 00:16:30.623 Test: blockdev writev readv block ...passed 00:16:30.623 Test: blockdev writev readv size > 128k ...passed 00:16:30.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:30.623 Test: blockdev comparev and writev ...passed 00:16:30.623 Test: blockdev nvme passthru rw ...passed 00:16:30.623 Test: blockdev nvme passthru vendor specific ...passed 00:16:30.623 Test: blockdev nvme admin passthru ...passed 00:16:30.623 Test: blockdev copy ...passed 00:16:30.623 00:16:30.623 Run Summary: Type Total Ran Passed Failed Inactive 00:16:30.623 suites 1 1 n/a 0 0 00:16:30.623 tests 23 23 23 0 0 00:16:30.623 asserts 130 130 130 0 n/a 00:16:30.623 00:16:30.623 Elapsed time = 0.316 seconds 00:16:30.623 0 00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 99986 
00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 99986 ']' 00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 99986 00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:30.623 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99986 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99986' 00:16:30.882 killing process with pid 99986 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 99986 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 99986 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:30.882 00:16:30.882 real 0m1.449s 00:16:30.882 user 0m3.503s 00:16:30.882 sys 0m0.314s 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:30.882 02:00:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:30.882 ************************************ 00:16:30.882 END TEST bdev_bounds 00:16:30.882 ************************************ 00:16:31.141 02:00:36 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:31.141 02:00:36 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:31.141 02:00:36 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:16:31.141 02:00:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:31.141 ************************************ 00:16:31.141 START TEST bdev_nbd 00:16:31.141 ************************************ 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:31.141 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- 
bdev/blockdev.sh@314 -- # local bdev_list 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100029 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100029 /var/tmp/spdk-nbd.sock 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 100029 ']' 00:16:31.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:31.142 02:00:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:31.142 [2024-12-07 02:00:36.512073] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:31.142 [2024-12-07 02:00:36.512271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.401 [2024-12-07 02:00:36.655099] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.401 [2024-12-07 02:00:36.704656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:31.969 02:00:37 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:32.228 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:32.229 1+0 records in 00:16:32.229 1+0 records out 00:16:32.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224516 s, 18.2 MB/s 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:32.229 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:32.489 { 00:16:32.489 "nbd_device": "/dev/nbd0", 00:16:32.489 "bdev_name": "raid5f" 00:16:32.489 } 00:16:32.489 ]' 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:32.489 { 00:16:32.489 "nbd_device": "/dev/nbd0", 00:16:32.489 "bdev_name": "raid5f" 00:16:32.489 } 00:16:32.489 ]' 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.489 02:00:37 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.748 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.009 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:33.269 /dev/nbd0 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.269 02:00:38 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.269 1+0 records in 00:16:33.269 1+0 records out 00:16:33.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258182 s, 15.9 MB/s 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.269 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:33.529 { 00:16:33.529 "nbd_device": "/dev/nbd0", 00:16:33.529 "bdev_name": "raid5f" 00:16:33.529 } 00:16:33.529 ]' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:33.529 { 00:16:33.529 "nbd_device": "/dev/nbd0", 00:16:33.529 "bdev_name": "raid5f" 00:16:33.529 } 00:16:33.529 ]' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:33.529 256+0 records in 00:16:33.529 256+0 records out 00:16:33.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012621 s, 83.1 MB/s 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:33.529 256+0 records in 00:16:33.529 256+0 records out 00:16:33.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0309348 s, 33.9 MB/s 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.529 02:00:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:33.789 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:34.048 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:34.049 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:34.049 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:34.049 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:34.049 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:34.308 malloc_lvol_verify 00:16:34.308 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:34.567 a5d44971-9a97-4486-a15a-7714296a9816 00:16:34.567 02:00:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:34.828 4c3129d3-9ad3-4104-a309-d893ddd41a34 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:34.828 /dev/nbd0 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:34.828 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:34.828 mke2fs 1.47.0 (5-Feb-2023) 00:16:34.828 Discarding device blocks: 0/4096 done 00:16:34.828 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:34.828 00:16:34.828 Allocating group tables: 0/1 done 00:16:34.828 Writing inode tables: 0/1 done 00:16:34.828 Creating journal (1024 blocks): done 00:16:34.828 Writing superblocks and filesystem accounting information: 0/1 done 00:16:34.828 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100029 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 100029 ']' 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 100029 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100029 00:16:35.087 killing process with pid 100029 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100029' 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 100029 00:16:35.087 02:00:40 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 100029 00:16:35.347 02:00:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:35.347 ************************************ 00:16:35.347 END TEST bdev_nbd 00:16:35.347 ************************************ 00:16:35.347 00:16:35.347 real 0m4.400s 00:16:35.347 user 0m6.547s 00:16:35.347 sys 0m1.120s 00:16:35.347 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.347 02:00:40 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:35.607 02:00:40 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:35.607 02:00:40 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:16:35.607 02:00:40 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:16:35.607 02:00:40 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:16:35.607 02:00:40 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:35.607 02:00:40 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.607 02:00:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:35.607 ************************************ 00:16:35.607 START TEST bdev_fio 00:16:35.607 ************************************ 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:35.607 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:16:35.607 02:00:40 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:35.607 ************************************ 00:16:35.607 START TEST bdev_fio_rw_verify 00:16:35.607 ************************************ 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:16:35.607 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:35.868 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:35.868 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:35.868 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:16:35.868 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:35.868 02:00:41 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:35.868 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:35.868 fio-3.35 00:16:35.868 Starting 1 thread 00:16:48.085 00:16:48.085 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100218: Sat Dec 7 02:00:51 2024 00:16:48.085 read: IOPS=11.3k, BW=44.0MiB/s (46.1MB/s)(440MiB/10001msec) 00:16:48.085 slat (nsec): min=18813, max=63370, avg=21026.72, stdev=2346.41 00:16:48.085 clat (usec): min=10, max=419, avg=141.86, stdev=50.91 00:16:48.085 lat (usec): min=30, max=466, avg=162.89, stdev=51.39 00:16:48.085 clat percentiles (usec): 00:16:48.085 | 50.000th=[ 145], 99.000th=[ 249], 99.900th=[ 285], 99.990th=[ 367], 00:16:48.085 | 99.999th=[ 416] 00:16:48.085 write: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(455MiB/9881msec); 0 zone resets 00:16:48.085 slat (usec): min=8, max=225, avg=18.26, stdev= 4.23 00:16:48.085 clat (usec): min=59, max=1392, avg=324.21, stdev=50.41 00:16:48.085 lat (usec): min=76, max=1605, avg=342.46, stdev=51.93 00:16:48.085 clat percentiles (usec): 00:16:48.085 | 50.000th=[ 326], 99.000th=[ 445], 99.900th=[ 652], 99.990th=[ 1237], 00:16:48.085 | 99.999th=[ 1352] 00:16:48.085 bw ( KiB/s): min=43184, max=50504, per=98.92%, avg=46684.21, stdev=1886.68, samples=19 00:16:48.085 iops : min=10796, max=12626, avg=11671.05, stdev=471.67, samples=19 00:16:48.085 lat (usec) : 20=0.01%, 50=0.01%, 
100=11.91%, 250=39.67%, 500=48.25% 00:16:48.085 lat (usec) : 750=0.13%, 1000=0.02% 00:16:48.085 lat (msec) : 2=0.02% 00:16:48.085 cpu : usr=98.97%, sys=0.42%, ctx=38, majf=0, minf=12424 00:16:48.085 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.085 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.085 issued rwts: total=112652,116584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.085 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:48.085 00:16:48.085 Run status group 0 (all jobs): 00:16:48.085 READ: bw=44.0MiB/s (46.1MB/s), 44.0MiB/s-44.0MiB/s (46.1MB/s-46.1MB/s), io=440MiB (461MB), run=10001-10001msec 00:16:48.085 WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=455MiB (478MB), run=9881-9881msec 00:16:48.085 ----------------------------------------------------- 00:16:48.085 Suppressions used: 00:16:48.085 count bytes template 00:16:48.085 1 7 /usr/src/fio/parse.c 00:16:48.085 430 41280 /usr/src/fio/iolog.c 00:16:48.085 1 8 libtcmalloc_minimal.so 00:16:48.085 1 904 libcrypto.so 00:16:48.085 ----------------------------------------------------- 00:16:48.085 00:16:48.085 00:16:48.085 real 0m11.180s 00:16:48.085 user 0m11.383s 00:16:48.085 sys 0m0.625s 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:48.085 ************************************ 00:16:48.085 END TEST bdev_fio_rw_verify 00:16:48.085 ************************************ 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "db7978fa-007e-48ad-a72d-9540553ce912"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "db7978fa-007e-48ad-a72d-9540553ce912",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "db7978fa-007e-48ad-a72d-9540553ce912",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "636fcf8e-a0d3-4d82-b8bc-abacd213d44d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "d97ab2dd-ac8c-4e36-851e-92d3b554a9dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "6e263f01-cd48-4cd8-aefc-bc375d90361a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:48.085 /home/vagrant/spdk_repo/spdk 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:48.085 00:16:48.085 real 0m11.463s 00:16:48.085 user 0m11.511s 00:16:48.085 sys 0m0.753s 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.085 02:00:52 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:48.085 ************************************ 00:16:48.085 END TEST bdev_fio 00:16:48.085 ************************************ 00:16:48.085 02:00:52 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:48.085 02:00:52 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:48.085 02:00:52 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:48.085 02:00:52 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.085 02:00:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:48.085 ************************************ 00:16:48.085 START TEST bdev_verify 00:16:48.085 ************************************ 00:16:48.085 02:00:52 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:48.085 [2024-12-07 02:00:52.474740] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:16:48.085 [2024-12-07 02:00:52.474851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100374 ] 00:16:48.085 [2024-12-07 02:00:52.612091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:48.085 [2024-12-07 02:00:52.663628] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.085 [2024-12-07 02:00:52.663767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.085 Running I/O for 5 seconds... 00:16:49.595 15929.00 IOPS, 62.22 MiB/s [2024-12-07T02:00:55.994Z] 15475.00 IOPS, 60.45 MiB/s [2024-12-07T02:00:56.933Z] 15326.33 IOPS, 59.87 MiB/s [2024-12-07T02:00:57.873Z] 15619.25 IOPS, 61.01 MiB/s [2024-12-07T02:00:57.873Z] 15641.40 IOPS, 61.10 MiB/s 00:16:52.411 Latency(us) 00:16:52.411 [2024-12-07T02:00:57.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.411 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:52.412 Verification LBA range: start 0x0 length 0x2000 00:16:52.412 raid5f : 5.01 7777.81 30.38 0.00 0.00 24676.50 198.54 21635.47 00:16:52.412 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:52.412 Verification LBA range: start 0x2000 length 0x2000 00:16:52.412 raid5f : 5.01 7857.00 30.69 0.00 0.00 24442.04 228.95 21520.99 00:16:52.412 [2024-12-07T02:00:57.874Z] =================================================================================================================== 00:16:52.412 [2024-12-07T02:00:57.874Z] Total : 15634.81 61.07 0.00 0.00 24558.76 198.54 21635.47 00:16:52.674 00:16:52.674 real 0m5.733s 00:16:52.674 user 0m10.689s 00:16:52.674 sys 0m0.217s 00:16:52.674 02:00:58 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:52.674 02:00:58 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:52.674 ************************************ 00:16:52.674 END TEST bdev_verify 00:16:52.674 ************************************ 00:16:52.933 02:00:58 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:52.933 02:00:58 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:52.933 02:00:58 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:52.933 02:00:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:52.933 ************************************ 00:16:52.933 START TEST bdev_verify_big_io 00:16:52.933 ************************************ 00:16:52.934 02:00:58 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:52.934 [2024-12-07 02:00:58.265226] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:52.934 [2024-12-07 02:00:58.265386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100456 ] 00:16:53.194 [2024-12-07 02:00:58.417547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:53.194 [2024-12-07 02:00:58.470690] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.194 [2024-12-07 02:00:58.470809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.454 Running I/O for 5 seconds... 
00:16:55.331 760.00 IOPS, 47.50 MiB/s [2024-12-07T02:01:01.733Z] 887.00 IOPS, 55.44 MiB/s [2024-12-07T02:01:03.112Z] 908.33 IOPS, 56.77 MiB/s [2024-12-07T02:01:04.052Z] 935.75 IOPS, 58.48 MiB/s [2024-12-07T02:01:04.052Z] 951.80 IOPS, 59.49 MiB/s 00:16:58.590 Latency(us) 00:16:58.590 [2024-12-07T02:01:04.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.590 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:58.590 Verification LBA range: start 0x0 length 0x200 00:16:58.590 raid5f : 5.25 483.70 30.23 0.00 0.00 6549862.93 155.61 285725.51 00:16:58.590 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:58.590 Verification LBA range: start 0x200 length 0x200 00:16:58.590 raid5f : 5.21 487.99 30.50 0.00 0.00 6442102.87 143.99 282062.37 00:16:58.590 [2024-12-07T02:01:04.052Z] =================================================================================================================== 00:16:58.590 [2024-12-07T02:01:04.052Z] Total : 971.69 60.73 0.00 0.00 6495961.68 143.99 285725.51 00:16:58.850 ************************************ 00:16:58.850 END TEST bdev_verify_big_io 00:16:58.850 ************************************ 00:16:58.850 00:16:58.850 real 0m5.975s 00:16:58.850 user 0m11.164s 00:16:58.850 sys 0m0.213s 00:16:58.850 02:01:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:58.850 02:01:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.850 02:01:04 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:58.850 02:01:04 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:58.850 02:01:04 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:58.850 02:01:04 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:58.850 ************************************ 00:16:58.850 START TEST bdev_write_zeroes 00:16:58.850 ************************************ 00:16:58.850 02:01:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:58.850 [2024-12-07 02:01:04.300968] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:58.850 [2024-12-07 02:01:04.301084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100543 ] 00:16:59.109 [2024-12-07 02:01:04.447126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.109 [2024-12-07 02:01:04.497502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.369 Running I/O for 1 seconds... 
00:17:00.306 27399.00 IOPS, 107.03 MiB/s 00:17:00.306 Latency(us) 00:17:00.306 [2024-12-07T02:01:05.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.306 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:17:00.306 raid5f : 1.01 27378.32 106.95 0.00 0.00 4660.99 1523.93 6439.13 00:17:00.306 [2024-12-07T02:01:05.768Z] =================================================================================================================== 00:17:00.306 [2024-12-07T02:01:05.768Z] Total : 27378.32 106.95 0.00 0.00 4660.99 1523.93 6439.13 00:17:00.573 00:17:00.573 real 0m1.725s 00:17:00.573 user 0m1.400s 00:17:00.573 sys 0m0.205s 00:17:00.573 02:01:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:00.573 02:01:05 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:17:00.573 ************************************ 00:17:00.573 END TEST bdev_write_zeroes 00:17:00.573 ************************************ 00:17:00.573 02:01:05 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:00.573 02:01:05 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:00.573 02:01:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:00.573 02:01:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:00.573 ************************************ 00:17:00.573 START TEST bdev_json_nonenclosed 00:17:00.573 ************************************ 00:17:00.573 02:01:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:00.842 [2024-12-07 
02:01:06.092924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:00.842 [2024-12-07 02:01:06.093031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100574 ] 00:17:00.842 [2024-12-07 02:01:06.236380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.842 [2024-12-07 02:01:06.289970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.842 [2024-12-07 02:01:06.290056] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:17:00.843 [2024-12-07 02:01:06.290084] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:00.843 [2024-12-07 02:01:06.290105] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:01.112 00:17:01.112 real 0m0.395s 00:17:01.112 user 0m0.177s 00:17:01.112 sys 0m0.114s 00:17:01.112 02:01:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.112 02:01:06 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:17:01.112 ************************************ 00:17:01.112 END TEST bdev_json_nonenclosed 00:17:01.112 ************************************ 00:17:01.112 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:01.112 02:01:06 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:17:01.112 02:01:06 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:01.112 02:01:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:01.112 
************************************ 00:17:01.112 START TEST bdev_json_nonarray 00:17:01.112 ************************************ 00:17:01.112 02:01:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:17:01.112 [2024-12-07 02:01:06.547306] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:01.112 [2024-12-07 02:01:06.547430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100605 ] 00:17:01.434 [2024-12-07 02:01:06.690995] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.434 [2024-12-07 02:01:06.741971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.434 [2024-12-07 02:01:06.742090] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:17:01.434 [2024-12-07 02:01:06.742118] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:17:01.434 [2024-12-07 02:01:06.742134] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:01.434 00:17:01.434 real 0m0.389s 00:17:01.434 user 0m0.169s 00:17:01.434 sys 0m0.117s 00:17:01.434 02:01:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.434 ************************************ 00:17:01.434 END TEST bdev_json_nonarray 00:17:01.434 02:01:06 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:17:01.434 ************************************ 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:17:01.694 02:01:06 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:17:01.694 00:17:01.694 real 0m34.653s 00:17:01.694 user 0m47.501s 00:17:01.694 sys 0m4.232s 00:17:01.694 02:01:06 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.694 02:01:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:17:01.694 
************************************ 00:17:01.694 END TEST blockdev_raid5f 00:17:01.694 ************************************ 00:17:01.694 02:01:06 -- spdk/autotest.sh@194 -- # uname -s 00:17:01.694 02:01:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:17:01.694 02:01:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:01.694 02:01:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:17:01.694 02:01:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:06 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:06 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:01.694 02:01:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:01.694 02:01:06 -- common/autotest_common.sh@10 -- # set +x 00:17:01.694 02:01:07 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:17:01.694 02:01:07 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:17:01.694 02:01:07 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:17:01.694 02:01:07 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:17:01.694 02:01:07 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:17:01.694 02:01:07 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:17:01.694 02:01:07 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:17:01.694 02:01:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:01.694 02:01:07 -- common/autotest_common.sh@10 -- # set +x 00:17:01.694 02:01:07 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:17:01.694 02:01:07 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:17:01.694 02:01:07 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:17:01.694 02:01:07 -- common/autotest_common.sh@10 -- # set +x 00:17:03.597 INFO: APP EXITING 00:17:03.597 INFO: killing all VMs 00:17:03.597 INFO: killing vhost app 00:17:03.597 INFO: EXIT DONE 00:17:03.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:03.856 Waiting for block devices as requested 00:17:04.115 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:04.115 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:17:05.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:05.051 Cleaning 00:17:05.051 Removing: /var/run/dpdk/spdk0/config 00:17:05.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:17:05.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:17:05.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:17:05.051 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:17:05.051 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:17:05.051 Removing: /var/run/dpdk/spdk0/hugepage_info 00:17:05.051 Removing: /dev/shm/spdk_tgt_trace.pid68860 00:17:05.051 Removing: /var/run/dpdk/spdk0 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100209 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100374 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100456 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100543 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100574 00:17:05.051 Removing: /var/run/dpdk/spdk_pid100605 00:17:05.051 Removing: 
/var/run/dpdk/spdk_pid68697 00:17:05.051 Removing: /var/run/dpdk/spdk_pid68860 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69062 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69149 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69178 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69294 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69306 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69490 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69569 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69654 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69748 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69829 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69869 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69905 00:17:05.051 Removing: /var/run/dpdk/spdk_pid69975 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70087 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70518 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70564 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70612 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70628 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70699 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70709 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70773 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70789 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70831 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70849 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70891 00:17:05.051 Removing: /var/run/dpdk/spdk_pid70909 00:17:05.051 Removing: /var/run/dpdk/spdk_pid71049 00:17:05.051 Removing: /var/run/dpdk/spdk_pid71080 00:17:05.051 Removing: /var/run/dpdk/spdk_pid71169 00:17:05.051 Removing: /var/run/dpdk/spdk_pid72332 00:17:05.311 Removing: /var/run/dpdk/spdk_pid72533 00:17:05.311 Removing: /var/run/dpdk/spdk_pid72662 00:17:05.311 Removing: /var/run/dpdk/spdk_pid73267 00:17:05.311 Removing: /var/run/dpdk/spdk_pid73462 00:17:05.311 Removing: /var/run/dpdk/spdk_pid73596 00:17:05.311 Removing: /var/run/dpdk/spdk_pid74201 00:17:05.311 Removing: /var/run/dpdk/spdk_pid74509 00:17:05.311 Removing: 
/var/run/dpdk/spdk_pid74638 00:17:05.311 Removing: /var/run/dpdk/spdk_pid75979 00:17:05.311 Removing: /var/run/dpdk/spdk_pid76221 00:17:05.311 Removing: /var/run/dpdk/spdk_pid76350 00:17:05.311 Removing: /var/run/dpdk/spdk_pid77682 00:17:05.311 Removing: /var/run/dpdk/spdk_pid77922 00:17:05.311 Removing: /var/run/dpdk/spdk_pid78057 00:17:05.311 Removing: /var/run/dpdk/spdk_pid79392 00:17:05.311 Removing: /var/run/dpdk/spdk_pid79821 00:17:05.311 Removing: /var/run/dpdk/spdk_pid79956 00:17:05.311 Removing: /var/run/dpdk/spdk_pid81381 00:17:05.311 Removing: /var/run/dpdk/spdk_pid81634 00:17:05.311 Removing: /var/run/dpdk/spdk_pid81763 00:17:05.311 Removing: /var/run/dpdk/spdk_pid83193 00:17:05.311 Removing: /var/run/dpdk/spdk_pid83441 00:17:05.311 Removing: /var/run/dpdk/spdk_pid83570 00:17:05.311 Removing: /var/run/dpdk/spdk_pid84996 00:17:05.311 Removing: /var/run/dpdk/spdk_pid85466 00:17:05.311 Removing: /var/run/dpdk/spdk_pid85601 00:17:05.311 Removing: /var/run/dpdk/spdk_pid85728 00:17:05.311 Removing: /var/run/dpdk/spdk_pid86129 00:17:05.311 Removing: /var/run/dpdk/spdk_pid86851 00:17:05.311 Removing: /var/run/dpdk/spdk_pid87211 00:17:05.311 Removing: /var/run/dpdk/spdk_pid87893 00:17:05.311 Removing: /var/run/dpdk/spdk_pid88318 00:17:05.311 Removing: /var/run/dpdk/spdk_pid89053 00:17:05.311 Removing: /var/run/dpdk/spdk_pid89445 00:17:05.311 Removing: /var/run/dpdk/spdk_pid91359 00:17:05.311 Removing: /var/run/dpdk/spdk_pid91792 00:17:05.311 Removing: /var/run/dpdk/spdk_pid92211 00:17:05.311 Removing: /var/run/dpdk/spdk_pid94240 00:17:05.311 Removing: /var/run/dpdk/spdk_pid94710 00:17:05.311 Removing: /var/run/dpdk/spdk_pid95191 00:17:05.311 Removing: /var/run/dpdk/spdk_pid96227 00:17:05.311 Removing: /var/run/dpdk/spdk_pid96539 00:17:05.311 Removing: /var/run/dpdk/spdk_pid97449 00:17:05.311 Removing: /var/run/dpdk/spdk_pid97766 00:17:05.311 Removing: /var/run/dpdk/spdk_pid98676 00:17:05.311 Removing: /var/run/dpdk/spdk_pid98993 00:17:05.311 Removing: 
/var/run/dpdk/spdk_pid99663 00:17:05.311 Removing: /var/run/dpdk/spdk_pid99921 00:17:05.311 Removing: /var/run/dpdk/spdk_pid99955 00:17:05.311 Removing: /var/run/dpdk/spdk_pid99986 00:17:05.311 Clean 00:17:05.571 02:01:10 -- common/autotest_common.sh@1451 -- # return 0 00:17:05.571 02:01:10 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:17:05.571 02:01:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.571 02:01:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.571 02:01:10 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:17:05.571 02:01:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.571 02:01:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.571 02:01:10 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:05.571 02:01:10 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:17:05.571 02:01:10 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:17:05.571 02:01:10 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:17:05.571 02:01:10 -- spdk/autotest.sh@394 -- # hostname 00:17:05.571 02:01:10 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:17:05.830 geninfo: WARNING: invalid characters removed from testname! 
00:17:27.776 02:01:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:30.358 02:01:35 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:32.263 02:01:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:34.169 02:01:39 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:36.710 02:01:41 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:38.620 02:01:43 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:40.529 02:01:45 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:17:40.529 02:01:45 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:17:40.529 02:01:45 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:17:40.529 02:01:45 -- common/autotest_common.sh@1681 -- $ lcov --version
00:17:40.529 02:01:45 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:17:40.529 02:01:45 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:17:40.529 02:01:45 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:17:40.529 02:01:45 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:17:40.529 02:01:45 -- scripts/common.sh@336 -- $ IFS=.-:
00:17:40.529 02:01:45 -- scripts/common.sh@336 -- $ read -ra ver1
00:17:40.529 02:01:45 -- scripts/common.sh@337 -- $ IFS=.-:
00:17:40.529 02:01:45 -- scripts/common.sh@337 -- $ read -ra ver2
00:17:40.529 02:01:45 -- scripts/common.sh@338 -- $ local 'op=<'
00:17:40.529 02:01:45 -- scripts/common.sh@340 -- $ ver1_l=2
00:17:40.529 02:01:45 -- scripts/common.sh@341 -- $ ver2_l=1
00:17:40.529 02:01:45 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:17:40.529 02:01:45 -- scripts/common.sh@344 -- $ case "$op" in
00:17:40.529 02:01:45 -- scripts/common.sh@345 -- $ : 1
00:17:40.529 02:01:45 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:17:40.529 02:01:45 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:40.529 02:01:45 -- scripts/common.sh@365 -- $ decimal 1
00:17:40.529 02:01:45 -- scripts/common.sh@353 -- $ local d=1
00:17:40.529 02:01:45 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:17:40.529 02:01:45 -- scripts/common.sh@355 -- $ echo 1
00:17:40.529 02:01:45 -- scripts/common.sh@365 -- $ ver1[v]=1
00:17:40.529 02:01:45 -- scripts/common.sh@366 -- $ decimal 2
00:17:40.529 02:01:45 -- scripts/common.sh@353 -- $ local d=2
00:17:40.529 02:01:45 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:17:40.529 02:01:45 -- scripts/common.sh@355 -- $ echo 2
00:17:40.529 02:01:45 -- scripts/common.sh@366 -- $ ver2[v]=2
00:17:40.529 02:01:45 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:17:40.529 02:01:45 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:17:40.529 02:01:45 -- scripts/common.sh@368 -- $ return 0
00:17:40.529 02:01:45 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:40.529 02:01:45 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:17:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:40.529 --rc genhtml_branch_coverage=1
00:17:40.529 --rc genhtml_function_coverage=1
00:17:40.529 --rc genhtml_legend=1
00:17:40.529 --rc geninfo_all_blocks=1
00:17:40.529 --rc geninfo_unexecuted_blocks=1
00:17:40.529
00:17:40.529 '
00:17:40.529 02:01:45 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:17:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:40.529 --rc genhtml_branch_coverage=1
00:17:40.529 --rc genhtml_function_coverage=1
00:17:40.529 --rc genhtml_legend=1
00:17:40.529 --rc geninfo_all_blocks=1
00:17:40.529 --rc geninfo_unexecuted_blocks=1
00:17:40.529
00:17:40.529 '
00:17:40.529 02:01:45 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:17:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:40.529 --rc genhtml_branch_coverage=1
00:17:40.529 --rc genhtml_function_coverage=1
00:17:40.529 --rc genhtml_legend=1
00:17:40.529 --rc geninfo_all_blocks=1
00:17:40.529 --rc geninfo_unexecuted_blocks=1
00:17:40.529
00:17:40.529 '
00:17:40.529 02:01:45 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:17:40.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:40.529 --rc genhtml_branch_coverage=1
00:17:40.529 --rc genhtml_function_coverage=1
00:17:40.529 --rc genhtml_legend=1
00:17:40.529 --rc geninfo_all_blocks=1
00:17:40.529 --rc geninfo_unexecuted_blocks=1
00:17:40.529
00:17:40.529 '
00:17:40.529 02:01:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:17:40.529 02:01:45 -- scripts/common.sh@15 -- $ shopt -s extglob
00:17:40.529 02:01:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:17:40.529 02:01:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:40.530 02:01:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:40.530 02:01:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:40.530 02:01:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:40.530 02:01:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:40.530 02:01:45 -- paths/export.sh@5 -- $ export PATH
00:17:40.530 02:01:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:40.530 02:01:45 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:17:40.530 02:01:45 -- common/autobuild_common.sh@479 -- $ date +%s
00:17:40.530 02:01:45 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733536905.XXXXXX
00:17:40.530 02:01:45 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733536905.pIPop4
00:17:40.530 02:01:45 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:17:40.530 02:01:45 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:17:40.530 02:01:45 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:17:40.530 02:01:45 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:17:40.530 02:01:45 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:17:40.530 02:01:45 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:17:40.530 02:01:45 -- common/autobuild_common.sh@495 -- $ get_config_params
00:17:40.530 02:01:45 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:17:40.530 02:01:45 -- common/autotest_common.sh@10 -- $ set +x
00:17:40.530 02:01:45 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:17:40.530 02:01:45 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:17:40.530 02:01:45 -- pm/common@17 -- $ local monitor
00:17:40.530 02:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:40.530 02:01:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:40.530 02:01:45 -- pm/common@25 -- $ sleep 1
00:17:40.530 02:01:45 -- pm/common@21 -- $ date +%s
00:17:40.530 02:01:45 -- pm/common@21 -- $ date +%s
00:17:40.530 02:01:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733536905
00:17:40.530 02:01:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733536905
00:17:40.530 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733536905_collect-vmstat.pm.log
00:17:40.530 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733536905_collect-cpu-load.pm.log
00:17:41.471 02:01:46 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:17:41.471 02:01:46 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:17:41.471 02:01:46 -- spdk/autopackage.sh@14 -- $ timing_finish
00:17:41.471 02:01:46 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:41.471 02:01:46 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:41.471 02:01:46 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:41.471 02:01:46 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:17:41.471 02:01:46 -- pm/common@29 -- $ signal_monitor_resources TERM
00:17:41.471 02:01:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:17:41.471 02:01:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:41.471 02:01:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:17:41.471 02:01:46 -- pm/common@44 -- $ pid=102093
00:17:41.471 02:01:46 -- pm/common@50 -- $ kill -TERM 102093
00:17:41.471 02:01:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:17:41.471 02:01:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:17:41.471 02:01:46 -- pm/common@44 -- $ pid=102095
00:17:41.471 02:01:46 -- pm/common@50 -- $ kill -TERM 102095
00:17:41.471 + [[ -n 6163 ]]
00:17:41.471 + sudo kill 6163
00:17:41.481 [Pipeline] }
00:17:41.497 [Pipeline] // timeout
00:17:41.502 [Pipeline] }
00:17:41.517 [Pipeline] // stage
00:17:41.523 [Pipeline] }
00:17:41.537 [Pipeline] // catchError
00:17:41.548 [Pipeline] stage
00:17:41.550 [Pipeline] { (Stop VM)
00:17:41.564 [Pipeline] sh
00:17:41.849 + vagrant halt
00:17:44.390 ==> default: Halting domain...
00:17:52.543 [Pipeline] sh
00:17:52.824 + vagrant destroy -f
00:17:55.371 ==> default: Removing domain...
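The `stop_monitor_resources` trace above looks up each collector's pid file under the power output directory and sends it SIGTERM. A minimal standalone sketch of that pid-file teardown pattern (a simplified reconstruction for illustration, not SPDK's actual `pm/common`; the directory layout and monitor names merely follow the trace):

```shell
#!/usr/bin/env bash
# Stop background resource monitors by pid file, mirroring the
# stop_monitor_resources trace above. Simplified reconstruction:
# the real pm/common does more bookkeeping than this sketch.
stop_monitors() {
    local power_dir=$1 monitor pidfile pid
    for monitor in collect-cpu-load collect-vmstat; do
        pidfile=$power_dir/$monitor.pid
        [[ -e $pidfile ]] || continue           # monitor was never started
        pid=$(<"$pidfile")
        kill -TERM "$pid" 2>/dev/null || true   # it may have exited already
        rm -f "$pidfile"
    done
}
```

Recording the pid at start (`echo $! > "$power_dir/$monitor.pid"`) and signalling by file at exit keeps the collectors decoupled from the shell that launched them, which is why the trace can run this teardown from an EXIT trap.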
00:17:55.402 [Pipeline] sh
00:17:55.717 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:17:55.727 [Pipeline] }
00:17:55.742 [Pipeline] // stage
00:17:55.748 [Pipeline] }
00:17:55.763 [Pipeline] // dir
00:17:55.768 [Pipeline] }
00:17:55.783 [Pipeline] // wrap
00:17:55.789 [Pipeline] }
00:17:55.802 [Pipeline] // catchError
00:17:55.812 [Pipeline] stage
00:17:55.814 [Pipeline] { (Epilogue)
00:17:55.828 [Pipeline] sh
00:17:56.112 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:01.402 [Pipeline] catchError
00:18:01.404 [Pipeline] {
00:18:01.416 [Pipeline] sh
00:18:01.699 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:01.699 Artifacts sizes are good
00:18:01.708 [Pipeline] }
00:18:01.723 [Pipeline] // catchError
00:18:01.736 [Pipeline] archiveArtifacts
00:18:01.744 Archiving artifacts
00:18:01.840 [Pipeline] cleanWs
00:18:01.852 [WS-CLEANUP] Deleting project workspace...
00:18:01.852 [WS-CLEANUP] Deferred wipeout is used...
00:18:01.859 [WS-CLEANUP] done
00:18:01.861 [Pipeline] }
00:18:01.877 [Pipeline] // stage
00:18:01.882 [Pipeline] }
00:18:01.896 [Pipeline] // node
00:18:01.902 [Pipeline] End of Pipeline
00:18:01.944 Finished: SUCCESS
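Earlier in this log, the `lt 1.15 2` / `cmp_versions` trace takes the version printed by `lcov --version | awk '{print $NF}'`, splits it on `IFS=.-:`, and compares the pieces component-by-component as numbers. A minimal standalone sketch of that comparison idea (a simplified reconstruction, not the actual `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Numeric, component-wise "less than" for dotted version strings,
# as in the cmp_versions trace above. Assumes numeric components.
version_lt() {
    local IFS=.-:          # split on the same separators as the trace
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing components with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1               # equal is not less-than
}
```

Numeric comparison is what makes `1.9` sort below `1.15` here (9 < 15), which a plain string compare would get backwards; that is why the trace runs each component through `decimal` before comparing.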